collector.ipynb
###Markdown Part_1 - Data Collection 1.1. Get the list of books Scrape the links from a page containing a list of books
This function takes a page URL (`href`) as input and scrapes the book URLs contained in that page. N.B.: each page lists 100 books, so we expect to find 100 links per page. ###Code
from bs4 import BeautifulSoup

def scrap_url(href, driver, nlp):
    driver.get(href)
    # time.sleep(5)
    page_soup = BeautifulSoup(driver.page_source, features="lxml")
    links = page_soup.find_all('a', itemprop="url")
    lista_links = []
    i = 2
    for link in links:
        link_full = link.get('href')
        # each book anchor appears twice in the page, so keep every other link
        if (i % 2) == 0:
            string1 = 'https://www.goodreads.com/en'
            link_full = string1 + link_full
            lista_links.append(link_full)
        i = i + 1
    return lista_links
###Output _____no_output_____ ###Markdown Take the book's links
This script reads the web pages that contain the list of best books on the website "https://www.goodreads.com" and, for each page, takes the URLs of the books we are interested in, saving them in a file called "lista_url.txt". From each page we scrape the links of 100 books.
**lista_url.txt**: this file will contain 30k rows; each row is the link of a book ###Code
from selenium import webdriver
import spacy

# chromedriver = r"C:\Users\thoma\Desktop\HW3_ADM\chromedriver_win32"
driver = webdriver.Chrome(chromedriver)
nlp = spacy.load('en_core_web_sm')  # load the model once, outside the loop
for i in range(1, 301):
    href = "https://www.goodreads.com/list/show/1.Best_Books_Ever?page=" + str(i)
    # Use the function that finds the urls in the page.
    urls = scrap_url(href, driver, nlp)
    with open('lista_url.txt', 'a') as f:
        for item in urls:
            f.write("%s\n" % item)
###Output _____no_output_____ ###Markdown 1.2. Crawl books Download the HTML pages
With this function we save all the books as .html pages in the folder "html_folder". It takes each page, as a link from "lista_url.txt", downloads it and saves it as an .html file in the configured folder/filepath. After this step, we have all the book HTML pages downloaded, and we are ready to scrape them. ###Code
import os
import requests
from bs4 import BeautifulSoup
from itertools import islice

# filepath = r'C:\Users\thoma\Desktop\HW3_ADM\html_folder'
# Run it as currently configured to fetch new files
with open("lista_url.txt") as file_in:
    line_count = 1
    # article_i at i-th row of "lista_url.txt"
    for link in islice(file_in, 0, 30000):
        page = requests.get(link, allow_redirects=True)
        contenuto = page.text
        soup = BeautifulSoup(contenuto, features='lxml')
        # We automatically assign to each book the name "article_i.html"
        title = "article_" + str(line_count) + ".html"
        with open(os.path.join(filepath, title), "w", encoding='utf-8') as f2:
            f2.write(str(soup))
        line_count += 1
###Output _____no_output_____
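For static pages, a lighter-weight alternative to Selenium is to parse the downloaded HTML with Python's standard library alone. The sketch below mirrors the notebook's assumptions (anchors marked `itemprop="url"`, prefixed with the Goodreads base URL); the names `BookLinkParser` and `extract_book_links` are illustrative, not part of the original code.

```python
from html.parser import HTMLParser

class BookLinkParser(HTMLParser):
    """Collects the href of every <a itemprop="url"> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and a.get("itemprop") == "url" and "href" in a:
            self.links.append(a["href"])

def extract_book_links(html, prefix="https://www.goodreads.com/en"):
    """Return the absolute book URLs found in an HTML fragment."""
    parser = BookLinkParser()
    parser.feed(html)
    return [prefix + h for h in parser.links]

if __name__ == "__main__":
    sample = '<a itemprop="url" href="/book/show/1">x</a><a href="/other">y</a>'
    print(extract_book_links(sample))
    # ['https://www.goodreads.com/en/book/show/1']
```

Note that this parser collects each matching anchor exactly once, so the even/odd index filter used in `scrap_url` may be unnecessary with this approach.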
Aulas/Aula_02/.ipynb_checkpoints/Spark-I-Introduction-checkpoint.ipynb
###Markdown **2021/22** Introduction to Apache SparkIn this lecture we will introduce the Spark framework. Right now, the goal is to explain how it works and to highlight its potential.**Disclaimer**: Some content presented in this notebook, e.g. images, is based on references mentioned at the end of the notebook. **Context** In the past, computers got faster mainly due to processor speed increases, and most applications were designed to run on a single-processor machine. But as more data needed to be processed and hardware limits were being tested, research efforts moved towards parallel processing and new programming models. **Apache Spark** - is an open-source distributed cluster-computing framework. It is designed for large-scale distributed data processing, with a focus on speed and modularity;- provides in-memory storage for intermediate computations;- contains libraries with APIs for machine learning, SQL, stream processing and graph processing. Spark components and APIsSpark offers four components as libraries for diverse workloads in a unified stack.Code can be written in the languages Scala, SQL, Python, Java or R, which is then compiled into bytecode to be executed in Java Virtual Machines (JVMs) across the cluster. ![image.png](attachment:image.png) There are both low-level and high-level APIs related to (distributed) collections of data. We may have collections of:* **Resilient Distributed Dataset (RDD)** * they are now consigned to low-level APIs* **DataFrame** * the most common structured data - it simply represents a table of data with rows and columns* **Dataset** * a collection of objects, but it only makes sense in the case of Scala and Java Further details are to be covered later on, but we can highlight now that our focus will be on **DataFrames** Spark Core and Spark SQL EngineSpark Core contains the basic functionalities for running jobs that are needed by the other components. 
Spark SQL Engine provides additional help to do so.Computations will ultimately be converted into low-level RDD-based bytecode (in Scala) to be distributed and run in executors across the cluster. Spark SQLSpark SQL provides functions for manipulating large sets of distributed structured data using an SQL subset (ANSI SQL:2003-compliant).It can also be used for **reading** and **writing** data to and from various structured formats and data sources, such as JavaScript Object Notation (JSON) files, CSV files, Parquet files (an increasingly popular file format that allows for storing a schema alongside the data), relational databases, Hive, and others. There is also a query optimization framework called Catalyst. Spark Structured StreamingSpark Structured Streaming is a framework for ingesting real-time streaming data from various sources, such as HDFS-based storage, Kafka, Flume, Twitter, ZeroMQ, as well as customized ones. Developers are able to combine and react in real time to both static and streaming data. A stream is perceived as a continually growing structured table, against which queries are made as if it were a static table.Aspects of fault tolerance and late-data semantics are handled via the Spark SQL core engine. Hence, developers can focus on just writing streaming applications. Machine Learning MLlibSpark MLlib is a library of common machine-learning (ML) algorithms built on top of DataFrame-based APIs. Among other aspects, these APIs allow one to extract or transform features, build pipelines (for training and evaluating) and persist models during deployment (for saving/reloading).Available ML algorithms include logistic regression, naïve Bayes classification, support vector machines (SVMs), decision trees, random forests, linear regression, k-means clustering, among others. Graph Processing GraphXSpark GraphX is a library for manipulating graphs, that is, data structures comprising vertices and the edges connecting them. 
It provides algorithms for building, analysing, connecting and traversing graphs. Among others, there are implementations of important algorithms of graph theory, such as PageRank, connected components, shortest paths and singular value decomposition (SVD). Execution in a distributed architectureA **Spark Application** consists of a **driver** program responsible for orchestrating parallel operations on the Spark cluster. The driver accesses the distributed components in the cluster (**executors** and **manager**) via a **SparkSession**. ![image.png](attachment:image.png) SparkSessionA SparkSession instance provides a single entry point to all functionalities.For a Spark application, one needs to create the SparkSession object if none is available, as described below. In that case, we can configure it according to our own needs.But first, we have to make sure we can access **pyspark** from this notebook. One way to do so is to run the notebook using a suitable kernel. That is why we have already set one: **PySpark**.For the time being there is no need to provide further details about this kernel - it is just a file named *kernel.json* placed in a proper location and with some settings. 
###Code
from pyspark.sql import SparkSession

# build our own SparkSession
myspark = SparkSession\
    .builder\
    .appName("BigData")\
    .config("spark.sql.shuffle.partitions", 6)\
    .config("spark.sql.repl.eagerEval.enabled", True)\
    .getOrCreate()

# check it, including the link
myspark

# print SparkSession object
print(myspark)

# Example of usage:
# creating a range of numbers, represented as a distributed collection
numbers_to_n = myspark.range(1000000).toDF("Number")
###Output _____no_output_____ ###Markdown Cluster manager and executors* **Cluster manager** * responsible for managing the executors in the cluster of nodes on which the application runs, alongside allocating the requested resources * agnostic about where it runs, as long as the responsibilities above are met* **Spark executor** * runs on each worker node in the cluster * executors communicate with the driver program and are responsible for executing tasks on the workers* **Deployment modes** * variety of configurations and environments available, as shown below: (just for reference)

| Mode | Spark driver | Spark executor | Cluster manager |
|:----------------|:----------------------------------------------------|:----------------|:-----------------|
| Local | Runs on a single JVM, like a laptop or single node | Runs on the same JVM as the driver | Runs on the same host |
| Standalone | Can run on any node in the cluster | Each node in the cluster will launch its own executor | Can be allocated arbitrarily to any host in the cluster |
| YARN (client) | Runs on a client, not part of the cluster | YARN's NodeManager's container | YARN's Resource Manager works with YARN's Application Master to allocate the containers on NodeManagers for executors |
| YARN (cluster) | Runs with the YARN Application Master | Same as YARN client mode | Same as YARN client mode |
| Kubernetes | Runs in a Kubernetes pod | Each worker runs within its own pod | Kubernetes Master |

Distributed data and partitions* Partitioning of data allows for efficient 
parallelism since every executor can perform work in parallel* Physical data is broken up and distributed across storage as chunks called partitions, whether in HDFS or in cloud storage.* Each partition is treated as a dataframe in memory (logical data abstraction) * hence it is a collection of rows that sits on one physical machine of the cluster; * so if we have dataframes in our program we do not (for the most part) manipulate partitions individually - we simply specify high-level transformations of the data in the physical partitions, and Spark determines how this will play out across the cluster.* As much as possible, data locality is to be pursued: an executor is preferably allocated a task that requires reading the partition closest to it in the network, in order to minimize network bandwidth.**Question**: What happens if we have* multiple partitions but only one executor;* one partition but thousands of executors? Standalone application running in local modeConceptually, we prototype the application by running it locally with small datasets; then, for large datasets, we use more advanced deployment modes to take advantage of distributed and more powerful execution. Spark shells Spark provides four interpretative shells (windows) to carry out ad hoc data analysis:* pyspark* spark-shell* spark-sql* sparkRThey resemble their shell counterparts for the considered languages. 
The main difference now is that they have extra support for connecting to the cluster and for loading distributed data into the workers' memory.Notice that, if using shells:* the driver is part of the shell* the SparkSession mentioned above is automatically created, accessible via the variable `spark`* they are exited by pressing Ctrl-D Note: in accordance with the location of the Spark installation on our computer, we have set for the shell (terminal window) the following environment variables (in the file ~/.profile) export SPARK_HOME=/opt/spark export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin export PYSPARK_PYTHON=/usr/bin/python3 By the way, in Linux the command `which` is useful for checking where programs are installed. ###Code
# run pyspark
#in terminal
#!pyspark
###Output _____no_output_____ ###Markdown **Stop running the previous cell. Here we can't do that much!** Example running in a `pyspark` shellReading a file and then showing the top 10 lines, as well as the number of lines. It runs locally, in a single JVM. First, open an autonomous shell (Terminal window) and run the following commands, one by one: which pyspark pyspark --help pyspark spark.version 2+3 And then execute the commands (the provided file is located in the current directory) lines = spark.read.text("pyspark-help.txt") lines.show(10, truncate=False) lines.count() quit() Spark operations and related computationOperations on distributed data are of two types: **transformations** and **actions** Basic concepts* **Job**: parallel computation created by the driver, consisting of multiple tasks that get spawned in response to actions, e.g. save()* **Stage**: each job gets divided into smaller sets of tasks called stages, that depend on each other* **Task**: single unit of work or execution to be sent to a Spark executor (a task per core) ![image.png](attachment:image.png) Transformations In Spark, core data structures are **immutable**, that is, they cannot be changed after creation. 
If we want to change a DataFrame, we need to instruct Spark how to do it. Such instructions are called transformations. Hence, a transformation turns a DataFrame into a new, transformed one without altering the original data.Some examples are:

|Transformation | Description|
|:-------|:-------|
|**orderBy()**|Returns a new DataFrame sorted by specific column(s)|
|**groupBy()**|Groups the DataFrame using specified columns, so we can run aggregation on them|
|**filter()**|Filters rows using a given condition|
|**select()**|Returns a new DataFrame with selected columns|
|**join()**|Joins with another DataFrame, using a given join expression|

**Back to our myspark session...**Checking the content of a text file. ###Code
!ls -la
# strings =
strings = myspark.read.text("pyspark-help.txt")
strings.show(5, truncate=False)
n = strings.count()
n
# filtering lines with a particular word, say pyspark
# filtered =
filtered = strings.filter(strings.value.contains("pyspark"))
filtered.show(truncate=False)
filtered
###Output
+------------------------------+
|value                         |
+------------------------------+
|Usage: ./bin/pyspark [options]|
+------------------------------+
###Markdown Types of transformationsTransformations can be:* Narrow * a single output partition can be computed from a single input partition (no exchange of data, all performed in memory) * examples are **filter()**, **contains()*** Wide * data from other partitions across the cluster is read in, combined, and written to disk * examples are **groupBy()**, **reduceBy()** Example**Reading structured data, filtering some of it and then showing the result sorted**(The file is in the same folder as this notebook) ###Code
! ls -la
# Prior, let us first check the file we are about to use (with the help of Linux commands)
! 
head flights-US-2015.csv
# Read the datafile into a DataFrame using the CSV format,
# by inferring the schema and specifying that the file contains a header,
# which provides column names for comma-separated fields
# info_flights =
info_flights = myspark.read.load("flights-US-2015.csv",
                                 format="csv", sep=",",
                                 header=True, inferSchema=True)
# info_flights # or print(info_flights)
info_flights
info_flights.head(20)
# check how many records we have in the DataFrame
info_flights.count()
# and showing some of them
info_flights.show()
# get routes from the United States
# and try other options ...
routes = info_flights.filter(info_flights.ORIGIN_COUNTRY_NAME == "United States")
# show the routes
routes.show()
# show the routes ordered by flights
routes_ordered = routes.orderBy("FLIGHTS", ascending=False)
routes_ordered.show()
###Output
+------------------+-------------------+-------+
| DEST_COUNTRY_NAME|ORIGIN_COUNTRY_NAME|FLIGHTS|
+------------------+-------------------+-------+
|     United States|      United States| 370002|
|            Canada|      United States|   8399|
|            Mexico|      United States|   7140|
|    United Kingdom|      United States|   2025|
|             Japan|      United States|   1548|
|           Germany|      United States|   1468|
|Dominican Republic|      United States|   1353|
|       South Korea|      United States|   1048|
|       The Bahamas|      United States|    955|
|            France|      United States|    935|
|          Colombia|      United States|    873|
|            Brazil|      United States|    853|
|       Netherlands|      United States|    776|
|             China|      United States|    772|
|           Jamaica|      United States|    666|
|        Costa Rica|      United States|    588|
|       El Salvador|      United States|    561|
|            Panama|      United States|    510|
|              Cuba|      United States|    466|
|             Spain|      United States|    420|
+------------------+-------------------+-------+
only showing top 20 rows
###Markdown Lazy evaluation and actions Spark uses lazy evaluation, that is, it waits until the very last moment to execute the graph of computational instructions established, i.e. the plan of transformations that we would like to apply to the data.As results 
are not computed immediately, they are recorded as **lineage** (*a trace of descendants*), and at a later time in its execution plan Spark may rearrange certain transformations, coalesce them, or optimize them into stages for more efficient execution of the entire flow. Only when an **action** is invoked, or data is read/written to disk, is the lazy evaluation of all recorded transformations triggered.An action is like a play button. We may have:* Actions to view data in the console.* Actions to collect data to native objects in the respective language.* Actions to write to output data sources.Some examples are:

|Action | Description|
|:-------|:-------|
|**show()**|Prints the first rows to the console|
|**take(n)**|Returns the first rows as a list|
|**count()**|Returns the number of rows|
|**collect()**|Returns all the records as a list|
|**save()**|Saves the contents to a data source|

###Code
# Using the variable numbers_to_n (a DataFrame) set before...
even_numbers = numbers_to_n.where("number % 2 = 0")
# why didn't it return any output?
even_numbers.explain()
# or
even_numbers.explain(extended=True)
# count
even_numbers.count()
# get 5 of them
even_numbers.take(5)
# the 1st one
even_numbers.first()
# and show
even_numbers.show()
###Output
+------+
|Number|
+------+
|     0|
|     2|
|     4|
|     6|
|     8|
|    10|
|    12|
|    14|
|    16|
|    18|
|    20|
|    22|
|    24|
|    26|
|    28|
|    30|
|    32|
|    34|
|    36|
|    38|
+------+
only showing top 20 rows
###Markdown Fault tolerance**Lineage** in the context of lazy evaluation and **data immutability** mentioned above gives resiliency in the event of failures since:* Spark records each transformation in its lineage;* DataFrames are immutable between transformations;so Spark can reproduce the original state by replaying the recorded lineage. Spark UIThe Spark UI allows us to monitor the progress of a job. It displays information about the state of Spark jobs, their environment and the cluster state. So it is very useful for tuning and debugging. 
Usually the Spark UI is available on port 4040 of the driver node. (If that port is occupied, another one is provided)In local mode: http://localhost:4040 in a web browser. PS: recall the notebook cell above where myspark was checked. ###Code
myspark.sparkContext.uiWebUrl # check where the Spark UI is running
###Output _____no_output_____ ###Markdown Check the link presented above after execution. ###Code
# Let us stop the SparkSession
myspark.stop()
###Output _____no_output_____ ###Markdown ExerciseOur goal now is to write a Spark program that (i) reads a file containing flight data to and from the United States and then (ii) provides answers to the following questions about the data that has just been read:1. How many records exist in the dataset?2. How many routes originate in countries with more than one?3. What is the number of flights on the busiest route?4. Which countries are the top 5 destinations? (by number of flights) The Spark program ###Code
# Import the necessary libraries
import sys
from pyspark.sql import SparkSession
from pyspark.sql.functions import count, max, sum

# Build a SparkSession using the SparkSession APIs. If one does not exist, then create an instance.
# Notice that we can only have one per JVM
myspark = SparkSession\
    .builder\
    .appName("Flights")\
    .config("spark.sql.repl.eagerEval.enabled", True)\
    .getOrCreate()

# alternatively we could have written
# myspark = (SparkSession
#     .builder
#     .appName("Flights")
#     .getOrCreate())
# or
# spark = SparkSession.builder.appName("Flights").getOrCreate()
###Output _____no_output_____ ###Markdown As before, we are using the DataFrame high-level APIs (Spark SQL could also have been used here, but we leave it for the time being) ###Code
# read the dataset
# flight_data =

# First, let us check the schema and the initial lines of the dataset. 
# We should always take this step
# flight_data.printSchema()
# flight_data.show(5)

# Just a detail: to figure out how, for example, sorting by FLIGHTS would work
# flight_data.sort("FLIGHTS").explain() # check the Spark physical plan
###Output _____no_output_____ ###Markdown **Before moving on, a note about reading data from a csv file:**Above, we have inferred the schema from the first line of the csv file. By the way, reading is a transformation, not an action.But we could also have set the schema programmatically and then read the data from the file accordingly. When the schema is inferred from a huge file this may take some time, so in those circumstances we may decide to set the schema programmatically. Questions to be answered ###Code
# 1. How many records exist in the dataset?

# 2. How many routes originate in countries with more than one?

# 3. What is the number of flights on the busiest route?

# 4. Which countries are the top 5 destinations? (by number of flights)
# top_dest_countries_df = flight_data\

# show the results. As it is an action, it triggers the above query to be executed
# print("Total = %d" % (top_dest_countries_df.count()))

# Finally, stop the SparkSession
#myspark.stop()
###Output _____no_output_____
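Before writing the Spark version, the four queries can be prototyped with plain Python on a small in-memory sample to check the logic offline. The tuples below are illustrative records in `(destination, origin, flights)` form following the csv header, not the real dataset, and the variable names are my own.

```python
from collections import Counter

# tiny illustrative sample: (dest_country, origin_country, flights)
rows = [
    ("United States", "Romania", 15),
    ("United States", "Croatia", 1),
    ("United States", "Ireland", 344),
    ("Egypt", "United States", 15),
    ("Mexico", "United States", 7140),
]

# 1. How many records exist in the dataset?
n_records = len(rows)

# 2. How many routes originate in countries with more than one?
origin_counts = Counter(origin for _, origin, _ in rows)
routes_from_busy_origins = sum(c for c in origin_counts.values() if c > 1)

# 3. What is the number of flights on the busiest route?
busiest = max(rows, key=lambda r: r[2])[2]

# 4. Which countries are the top 5 destinations, by number of flights?
dest_totals = Counter()
for dest, _, flights in rows:
    dest_totals[dest] += flights
top5 = [country for country, _ in dest_totals.most_common(5)]
```

Each step maps directly onto a DataFrame operation: `len` onto `count()`, the `Counter` aggregations onto `groupBy(...).agg(...)`, and `most_common` onto `orderBy(..., ascending=False).limit(5)`.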
notebooks/autoshap model example.ipynb
###Markdown * Check log ###Code
!cat ../data/train_log_*
# Load the saved data and generate the plots
model.make_summary(n_high_contribution_cols = 5, n_high_contribution_interaction_cols = 3, show_plots=True, max_display=9)
###Output Loading data... Building dataframes... Making summary plots...
docs/source/notebooks/LoadDataBundle.ipynb
###Markdown Step1 - Set the ZIPLINE_ROOT ###Code
import os
os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '.zipline')
os.listdir(os.environ['ZIPLINE_ROOT'])
os.environ['ZIPLINE_TRADER_CONFIG'] = os.path.join(os.getcwd(), "./zipline-trader.yaml")
with open(os.environ['ZIPLINE_TRADER_CONFIG'], 'r') as f:
    data = f.read()
    print(data[:20])
###Output _____no_output_____ ###Markdown Step2 - Load your Bundle ###Code
import zipline
from zipline.data import bundles

bundle_name = 'alpaca_api'
bundle_data = bundles.load(bundle_name)
###Output _____no_output_____ ###Markdown Step3 - Create The Data Portal ###Code
from zipline.pipeline.loaders import USEquityPricingLoader
from zipline.utils.calendars import get_calendar
from zipline.pipeline.data import USEquityPricing
from zipline.data.data_portal import DataPortal
import pandas as pd

# Set the dataloader
pricing_loader = USEquityPricingLoader.without_fx(bundle_data.equity_daily_bar_reader,
                                                 bundle_data.adjustment_reader)

# Define the function for the get_loader parameter
def choose_loader(column):
    if column not in USEquityPricing.columns:
        raise Exception('Column not in USEquityPricing')
    return pricing_loader

# Set the trading calendar
trading_calendar = get_calendar('NYSE')

start_date = pd.Timestamp('2019-07-05', tz='utc')
end_date = pd.Timestamp('2020-11-13', tz='utc')

# Create a data portal
data_portal = DataPortal(bundle_data.asset_finder,
                         trading_calendar = trading_calendar,
                         first_trading_day = start_date,
                         equity_daily_reader = bundle_data.equity_daily_bar_reader,
                         adjustment_reader = bundle_data.adjustment_reader)
###Output _____no_output_____ ###Markdown Let's Get Some Historical Data ###Code
equity = bundle_data.asset_finder.lookup_symbol("ACES", end_date)
data_portal.get_history_window(assets=[equity], end_dt=end_date, bar_count=10,
                               frequency='1d', field='close', data_frequency='daily')
###Output _____no_output_____
docs/features/trajectories/trajectories.ipynb
###Markdown Organizing Phases into TrajectoriesThe majority of real-world use cases of optimal control involve complex trajectories that cannot be modeled with a single phase.For instance, different phases of a trajectory may have different equations of motion, different control parameterizations, or different path constraints.Phases are also necessary if the user wishes to impose intermediate constraints upon some variable, by imposing them as boundary constraints at a phase junction.The *Trajectory* class in Dymos is intended to simplify the development of multi-phase problems.It serves as a Group which contains the various phases belonging to the trajectory, and it provides linkage constraints that dictate how phases are linked together.This enables trajectories that are not only a sequence of phases in time, but may include branching behavior, allowing us to do things like track/constrain the path of a jettisoned rocket stage.It supports a `get_values` method similar to that of Phases that allows the user to retrieve the value of a variable within the trajectory.When verifying an answer with explicit simulation, the `simulate` method of Trajectory can simulate all of its member phases in parallel, providing a significant performance improvement for some cases. Instantiating a TrajectoryInstantiating a Trajectory is simple. Simply invoke `Trajectory()`. The trajectory object itself is an OpenMDAO `Group` which serves as a container for its constituent Phases.- phases An OpenMDAO `Group` or `ParallelGroup` holding the member phases- linkages A Dymos `PhaseLinkageComp` that manages all of the linkage constraints that dictate how the phases are connected. Adding PhasesPhases are added to a Trajectory using the `add_phase` method.```{eval-rst} .. 
automethod:: dymos.Trajectory.add_phase :noindex:``` Defining Phase LinkagesOnce phases have been added to the Trajectory, they exist as independent Groups within the OpenMDAO model.In order to enforce continuity among certain variables across phases, the user must declare which variables are to be continuous in value at each phase boundary.There are two methods in dymos which provide this functionality.The `add_linkage_constraint` method provides a very general way of coupling two phases together.It does so by generating a constraint of the following form:\begin{align} c = \mathrm{sign}_a \mathrm{var}_a + \mathrm{sign}_b \mathrm{var}_b\end{align}Method `add_linkage_constraint` lets the user specify the variables and phases to be compared for this constraint, as well as the location of the variable in each phase (either 'initial' or 'final').By default this method is set up to provide continuity in a variable between two phases:- the sign of variable `a` is +1 while the sign of variable `b` is -1.- the location of variable `a` is 'final' while the location of variable `b` is 'initial'.- the default value of the constrained quantity is 0.0.In this way, the default behavior constrains the final value of some variable in phase `a` to be the same as the initial value of some variable in phase `b`.Other values for these options can provide other functionality.For instance, to simulate a mass jettison, we could require that the initial value of `mass` in phase `b` be 1000 kg less than the value of mass at the end of phase `a`.Providing arguments `equals=1000, units='kg'` would achieve this.Similarly, specifying other values for the locations of the variables in each phase can be used to ensure that two phases start or end at the same condition - such as the case in a branching trajectory or a rendezvous.While `add_linkage_constraint` gives the user a powerful capability, providing simple state and time continuity across multiple phases would be a very verbose undertaking 
using this method.The `link_phases` method is intended to simplify this process.In the finite-burn orbit raising example, there are three phases: `burn1`, `coast`, `burn2`.This case is somewhat unusual in that the thrust acceleration is modeled as a state variable. The acceleration needs to be zero in the coast phase, but continuous between `burn1` and `burn2`, assuming no mass was jettisoned during the coast and that the thrust magnitude doesn't change. add_linkage_constraint```{eval-rst} .. automethod:: dymos.Trajectory.add_linkage_constraint :noindex:``` link_phases```{eval-rst} .. automethod:: dymos.Trajectory.link_phases :noindex:``` Examples of using the `link_phases` method**Typical Phase Linkage Sequence**A typical phase linkage sequence, where all phases use the same ODE (and therefore have the same states), is simply linked sequentially in time. ###Code t.link_phases(['phase1', 'phase2', 'phase3']) ###Output _____no_output_____ ###Markdown **Adding an Additional Linkage**If the user wants some control variable, `u`, to be continuous in value between `phase2` and `phase3` only, they could indicate that with the following code: ###Code t.link_phases(['phase2', 'phase3'], vars=['u']) ###Output _____no_output_____ ###Markdown **Branching Trajectories**For a more complex example, consider the case where there are two phases which branch off from the same point, such as the case of a jettisoned stage. The nominal trajectory consists of the phase sequence `['a', 'b', 'c']`. Let phase `['d']` be the phase that tracks the jettisoned component to its impact with the ground. 
The linkages in this casewould be defined as: ###Code t.link_phases(['a', 'b', 'c']) t.link_phases(['b', 'd']) ###Output _____no_output_____ ###Markdown **Specifying Linkage Locations**Phase linkages assume that, for each pair, the state/control values at the end (`'final'`)of the first phase are linked to the state/control values at the start of the second phase(`'initial'`).The user can override this behavior, but they must specify a pair of location strings foreach pair given in `phases`. For instance, in the following example phases `a` and `b`have the same initial time and state, but phase `c` follows phase `b`. Note since thereare three phases provided, there are two linkages and thus two pairs of locationspecifiers given. ###Code t.link_phases(['a', 'b', 'c'], locs=[('initial', 'initial'), ('final', 'initial')]) ###Output _____no_output_____
code/T9 - 1 - K Nearest Neighbors.ipynb
###Markdown K Nearest Neighbors ###Code import pandas as pd import numpy as np from sklearn import preprocessing, model_selection, neighbors df = pd.read_csv("../datasets/cancer/breast-cancer-wisconsin.data.txt",header=None) df.head() df.describe() df.columns = ["name","V1","V2","V3","V4","V5","V6","V7","V8","V9","class"] df.head() df = df.drop(["name"], axis=1) df.replace("?",-99999,inplace=True) df.head() Y = df["class"] X = df[["V1","V2","V3","V4","V5","V6","V7","V8","V9"]] X.head() Y.head() ###Output _____no_output_____ ###Markdown K nearest neighbors classifier ###Code X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X,Y,test_size=0.2) clf = neighbors.KNeighborsClassifier() clf.fit(X_train,Y_train) accuracy = clf.score(X_test,Y_test) accuracy ###Output _____no_output_____ ###Markdown Classification without data cleaning ###Code df = pd.read_csv("../datasets/cancer/breast-cancer-wisconsin.data.txt",header=None) df.replace("?",-99999,inplace=True) df.columns = ["name","V1","V2","V3","V4","V5","V6","V7","V8","V9","class"] Y = df["class"] X = df[["name","V1","V2","V3","V4","V5","V6","V7","V8","V9"]] X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X,Y,test_size=0.2) clf = neighbors.KNeighborsClassifier() clf.fit(X_train,Y_train) accuracy = clf.score(X_test,Y_test) accuracy ###Output _____no_output_____ ###Markdown Classifying new data ###Code Y = df["class"] X = df[["V1","V2","V3","V4","V5","V6","V7","V8","V9"]] X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X,Y,test_size=0.2) clf = neighbors.KNeighborsClassifier() clf.fit(X_train,Y_train) sample_measure = np.array([4,2,1,1,1,2,3,2,1]) sample_measure = sample_measure.reshape(1,-1) predict = clf.predict(sample_measure) predict sample_measure2 = np.array([[4,2,1,1,1,2,3,2,1],[4,2,1,1,1,2,3,2,1]]) predict = clf.predict(sample_measure2) predict ###Output _____no_output_____
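The `reshape(1,-1)` call above matters because scikit-learn estimators expect a 2D array of shape `(n_samples, n_features)`, so a bare 1D vector is ambiguous. A small sketch of the shapes involved, reusing the same hypothetical measurement values:

```python
import numpy as np

sample_measure = np.array([4, 2, 1, 1, 1, 2, 3, 2, 1])
print(sample_measure.shape)   # (9,)  -> one bare feature vector

sample_2d = sample_measure.reshape(1, -1)
print(sample_2d.shape)        # (1, 9) -> one sample with 9 features

batch = np.array([[4, 2, 1, 1, 1, 2, 3, 2, 1],
                  [4, 2, 1, 1, 1, 2, 3, 2, 1]])
print(batch.shape)            # (2, 9) -> two samples, no reshape needed
```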
3/3.2.1_inheritance.ipynb
###Markdown Inheritance in Python Inheritance in Python* Class inheritance* Multiple inheritance* Calling `super()`* `name mangling`* Composition vs. inheritance Why do we need class inheritance?* To change the behavior of a class* To extend the functionality of a class ###Code # A household-pet class class Pet(): def __init__(self, name=None): self.name = name # A dog class class Dog(Pet): # The parent class (also called the base or super class) is given in parentheses def __init__(self, name, breed=None): super().__init__(name) # Call the parent class initializer self.breed = breed def say(self): return '{}: waw!'.format(self.name) dog = Dog('Шарик', 'Доберман') print(dog.name) # Шарик print(dog.breed) # Доберман print(dog.say()) # Шарик: waw! ###Output Шарик Доберман Шарик: waw! ###Markdown Multiple inheritance ###Code import json class ExportJSON(): def to_json(self): return json.dumps({ 'name': self.name, 'breed': self.breed }) class ExDog(Dog, ExportJSON): # Multiple inheritance pass # Create an instance of the ExDog class dog = ExDog('Белка', breed='Дворняжка') # Show all attribute values of the class instance print(dog.__dict__) print(dog.to_json()) ###Output {'name': 'Белка', 'breed': 'Дворняжка'} {"name": "\u0411\u0435\u043b\u043a\u0430", "breed": "\u0414\u0432\u043e\u0440\u043d\u044f\u0436\u043a\u0430"} ###Markdown Inheriting from `object` (`issubclass`)Every class is a descendant of the `object` class: ###Code issubclass(int, object) issubclass(Dog, object) issubclass(Dog, Pet) issubclass(Dog, int) ###Output _____no_output_____ ###Markdown Is an object an instance of a class? (`isinstance`) ###Code isinstance(dog, Dog) isinstance(dog, Pet) isinstance(dog, object) ###Output _____no_output_____ ###Markdown Attribute and method lookup. Class linearization MRO - **M**ethod **R**esolution **O**rder. It shows the class hierarchy and the order in which an object's attributes and methods are looked up.
For example, attribute (including method) lookup starts in the `ExDog` class, then moves to `Dog`, `Pet`, `ExportJSON`, and finally `object`: ###Code ExDog.__mro__ ###Output _____no_output_____ ###Markdown Using `super()` ###Code class ExDog(Dog, ExportJSON): def __init__(self, name, breed=None): # Calling super() without arguments; the __init__ method is looked up via the MRO super().__init__(name) class WoolenDog(Dog, ExportJSON): def __init__(self, name, breed=None): # Calling super() with arguments; explicitly selects the __init__ method of a specific class super(Dog, self).__init__(name) self.breed = 'Шерстяная собака породы {}'.format(breed) dog = WoolenDog('Жучка', 'Такса') print(dog.breed) ###Output Шерстяная собака породы Такса ###Markdown Resolving name conflicts, `name mangling` ###Code class Dog(Pet): def __init__(self, name, breed=None): super().__init__(name) self.__breed = breed # Two leading underscores mark the attribute as private def say(self): return '{}: wow!'.format(self.name) def get_breed(self): return self.__breed class ExDog(Dog, ExportJSON): def get_breed(self): # self.__breed - this attribute is not accessible here, raising AttributeError return 'Порода: {} - {}'.format(self.name, self.__breed) dog = ExDog('Фокс', 'Мопс') print( dog.get_breed() ) # AttributeError: 'ExDog' object has no attribute '_ExDog__breed' dog.__dict__ ###Output _____no_output_____ ###Markdown What happened? The `dog.get_breed()` method tried to access the `__breed` (`_ExDog__breed`) attribute of the `ExDog` instance, but no such attribute was found. In the dictionary of the `ExDog` instance we see the attribute `__breed` (`_Dog__breed`), which belongs to the `Dog` class.
Python still allows access to a class's private attributes, so we can fix the code of our class, although it is better not to overuse this: ###Code class ExDog(Dog, ExportJSON): def get_breed(self): # Changed self.__breed to self._Dog__breed return 'Порода: {} - {}'.format(self.name, self._Dog__breed) dog = ExDog('Фокс', 'Мопс') print( dog.get_breed() ) ###Output Порода: Фокс - Мопс
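The mangling rule itself can be demonstrated without the `Dog` hierarchy. A minimal sketch with hypothetical classes `Base` and `Child`: the attribute written as `self.__secret` inside `Base` is stored under the mangled name `_Base__secret`, which is why a subclass has to spell out the mangled name.

```python
class Base:
    def __init__(self):
        self.__secret = 42        # stored on the instance as _Base__secret

class Child(Base):
    def reveal(self):
        # self.__secret here would mangle to _Child__secret and raise
        # AttributeError; the attribute created by Base.__init__ is _Base__secret.
        return self._Base__secret

c = Child()
print(sorted(c.__dict__))         # ['_Base__secret']
print(c.reveal())                 # 42
```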
python/en/archive/topics/temp/audio/voice_activity_detection/jupyter_notebooks/VAD_in_Python-Part2.ipynb
###Markdown VAD (Voice Activity Detection) in Python-Part 2 4. Testing WebRTC VAD 4.1. Installation 4.1.1. Install Python Interface to WebRTC VAD [README.rst](https://github.com/wiseman/py-webrtcvad/blob/master/README.rst) for [Python interface to the WebRTC Voice Activity Detector](https://github.com/wiseman/py-webrtcvad/) explains the installation process.```bash(hula) ~/$ pip install webrtcvad...Successfully installed webrtcvad-2.0.10(hula) ~/$``` 4.1.2. Verify the Installation```bash(hula) ~/$ pythonPython 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0] :: Anaconda, Inc. on linuxType "help", "copyright", "credits" or "license" for more information.>>> import webrtcvad>>> exit()``` 4.2. Test Code: A Frame of Zeros Returns False The last line of code in "How to use it > (Step 3)" of [README.rst](https://github.com/wiseman/py-webrtcvad/blob/master/README.rst) fails to run in Python 3. ```pythonprint 'Contains speech: %s' % (vad.is_speech(frame, sample_rate) File "", line 22 print 'Contains speech: %s' % (vad.is_speech(frame, sample_rate) ^SyntaxError: invalid syntax```To fix the error, change the line to:```python Rightprint( 'Contains speech: %s'% (vad.is_speech(frame, sample_rate)) ) Wrongprint 'Contains speech: %s' % (vad.is_speech(frame, sample_rate)```The following code is my modified version for better readability. The code creates a test frame, test_frame, filled with zeros. b'\x00' is a byte string 0, while b'\x01' is 1. The VAD result is False because this frame filled with zeros is not speech. ###Code import webrtcvad vad = webrtcvad.Vad() # 0~3, 0 is the least aggressive; # 3 is the most aggressive in filtering out non-speech frames. vad.set_mode(1) # Run the VAD on 10 ms of silence. The result should be False. sample_rate = 16000 # Hz frame_duration = 10 # ms # The following lines are modified for better readability.
frame_duration_in_sec = frame_duration / 1000 n_samples_per_frame = int( frame_duration_in_sec * sample_rate ) print(f'frame_duration_in_sec = {frame_duration_in_sec}' ) print(f'n_samples_per_frame = {n_samples_per_frame}' ) test_frame = b'\x00\x00' * n_samples_per_frame test_result = vad.is_speech( test_frame, sample_rate ) print(f'test_frame = {test_frame}' ) print( 'Contains speech: %s'% (test_result) ) ###Output frame_duration_in_sec = 0.01 n_samples_per_frame = 160 test_frame = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' Contains speech: False ###Markdown 4.3. Code Examples for WebRTC VADThe previous test code is too simple. More code examples are below.1. 
[Voice activity detection example](https://www.kaggle.com/holzner/voice-activity-detection-example) at [kaggle](https://www.kaggle.com/)2. [vad.py](https://github.com/wangshub/python-vad/blob/master/vad.py) at [wangshub/python-vad](https://github.com/wangshub/python-vad)3. [example.py](https://github.com/wiseman/py-webrtcvad/blob/master/example.py#L148) at [py-webrtcvad](https://github.com/wiseman/py-webrtcvad)These examples are simple, but useful to figure out how to use WebRTC VAD in Python. 4.3.1. VAD Example at Kaggle[Voice activity detection example](https://www.kaggle.com/holzner/voice-activity-detection-example) explains the following code, which reads in a .wav file, partitions the samples into frames by sliding a frame-sized window, and feeds each frame to the vad.is_speech function in order to determine whether the frame is speech or not.```pythonimport osimport numpy as np%matplotlib inlineimport matplotlib.pyplot as pltfrom scipy.io import wavfileimport webrtcvadimport structtrain_audio_path = "../input/train/audio"filename = 'yes/0a7c2a8d_nohash_0.wav'sample_rate, samples = wavfile.read(os.path.join(train_audio_path, filename))vad = webrtcvad.Vad()vad.set_mode(3)window_duration = 0.03 duration in secondsraw_samples = struct.pack("%dh" % len(samples), *samples)samples_per_window = int(window_duration * sample_rate + 0.5)bytes_per_sample = 2segments = []for start in np.arange(0, len(samples), samples_per_window): stop = min(start + samples_per_window, len(samples)) is_speech = vad.is_speech(raw_samples[start * bytes_per_sample: stop * bytes_per_sample], sample_rate = sample_rate) segments.append(dict( start = start, stop = stop, is_speech = is_speech)) Plot the input wav fileplt.figure(figsize = (10,7))plt.plot(samples)ymax = max(samples) plot segments identified as speechfor segment in segments: if segment['is_speech']: plt.plot([ segment['start'], segment['stop'] - 1], [ymax * 1.1, ymax * 1.1], color = 'orange')plt.xlabel('sample')plt.grid()speech_samples
= np.concatenate([ samples[segment['start']:segment['stop']] for segment in segments if segment['is_speech']])import IPython.display as ipd ipd.Audio(speech_samples, rate=sample_rate)``` ###Code import os import numpy as np %matplotlib inline import matplotlib.pyplot as plt from scipy.io import wavfile import webrtcvad import struct train_audio_path = "." # This line is different. filename = 'english-0.wav' # This line is different. sample_rate, samples = wavfile.read(os.path.join(train_audio_path, filename)) vad = webrtcvad.Vad() vad.set_mode(3) window_duration = 0.03 # duration in seconds raw_samples = struct.pack("%dh" % len(samples), *samples) samples_per_window = int(window_duration * sample_rate + 0.5) bytes_per_sample = 2 segments = [] for start in np.arange(0, len(samples), samples_per_window): stop = min(start + samples_per_window, len(samples)) is_speech = vad.is_speech(raw_samples[start * bytes_per_sample: stop * bytes_per_sample], sample_rate = sample_rate) segments.append(dict( start = start, stop = stop, is_speech = is_speech)) # Plot the input wav file plt.figure(figsize = (10,7)) plt.plot(samples) ymax = max(samples) # plot segments identified as speech for segment in segments: if segment['is_speech']: plt.plot([ segment['start'], segment['stop'] - 1], [ymax * 1.1, ymax * 1.1], color = 'orange') plt.xlabel('sample') plt.grid() speech_samples = np.concatenate([ samples[segment['start']:segment['stop']] for segment in segments if segment['is_speech']]) import IPython.display as ipd ipd.Audio(speech_samples, rate=sample_rate) ###Output _____no_output_____
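The byte arithmetic in the example above (slicing `raw_samples` at `start * bytes_per_sample`) is easy to verify without `webrtcvad`. A minimal sketch with synthetic 16-bit samples, assuming the same 16 kHz rate and 30 ms window as the Kaggle example:

```python
import struct

sample_rate = 16000           # Hz
window_duration = 0.03        # seconds
bytes_per_sample = 2          # 16-bit PCM

samples = list(range(1440))   # synthetic samples covering three 30 ms windows
raw_samples = struct.pack("%dh" % len(samples), *samples)

samples_per_window = int(window_duration * sample_rate + 0.5)
print(samples_per_window)     # 480 samples per 30 ms window

frame = raw_samples[0:samples_per_window * bytes_per_sample]
print(len(frame))             # 960 bytes of audio per frame
```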
Take-Home Project.ipynb
###Markdown I will proceed by building various functions that will allow me to create the following new columns from the existing columns of df:• *wikipedia_url* (of the article) • *article_title* • *section_titles* (in the article) • *subsection_titles* (in the article) • *first_sentence* (of the article) • *article_length* (number of characters) ###Code #I ended up not using this function. def wikipedia_url(string): """ Returns the URL of the Wikipedia article, usually embedded in the first column. """ #The URLs are contained within '\t....\t', and the re.findall() result is a list, hence the [0]. url = re.findall(r'\t.+?\t', string)[0] #We don't want the \t's returned, so we eliminate the first and last characters from the url. #Also, the 'https://' substring causes the strings to be saved as URLs, truncating some URL ends and leading to erroneous pages. #Therefore I will remove these characters as well. return url[9:len(url)-1] #Add the new 'url' variable to df df['url'] = df['columns_combined'].apply(wikipedia_url) def article_title(string): """ Returns the title of the Wikipedia article, usually embedded in the first column. """ #The article titles are at the beginning of the string, before the first '\t'. Also, the title is returned as a list, hence the [0]. title = re.findall(r'.+?\t', string)[0] #We don't want to return the \t, so we eliminate the last character. return title[:len(title)-1] #Add the new 'article_title' variable to df. #Usually the article title is in c0. However, it sometimes spills over to c1 and c2 as well. df['article_title'] = (df['c0'] + df['c1'] + df['c2']).apply(article_title) def article_section_titles(string): """ Input: the full article text, found in df['columns_combined']. Output: a list of section titles for the input Wikipedia page. Typically, the section titles of an article appear within 4 equal signs, with a space before the first, a space after the second and no space before the third. e.g.
' == Types of artificial intelligence==' """ #Some of the pages have no section titles and need to be treated differently for string methods to work. #We will distinguish these 2 cases by the number of occurrences of substrings of the form ' == ...=='. section_titles = re.findall(r' == .+?==', string) if len(section_titles) > 0: #The = signs will be removed, as well as the 2 spaces at the beginning of each section title. section_titles = list(pd.Series(section_titles)\ .str.replace('=','')\ .str.lstrip(' ')) return section_titles if len(section_titles) == 0: #These rows will return an error if we try to apply the string methods from the above case, so we treat them separately. return [] #Add the new 'section_titles' variable to df. df['section_titles'] = df['columns_combined'].apply(article_section_titles) def article_subsection_titles(string): """ Input: the full article text, found in df['columns_combined']. Output: a list of subsection titles for the input Wikipedia page. Typically, the section titles of an article appear within 6 equal signs, with a space before the first, a space after the third and no space before the fourth. e.g. ' === Metric===' """ #Some of the pages have no subsection titles and need to be treated differently for string methods to work. #We will distinguish these 2 cases by the number of occurrences of substrings of the form ' === ...==='. subsection_titles = re.findall(r' === .+?===', string) if len(subsection_titles) > 0: #The = signs will be removed, as well as the 2 spaces at the beginning of each subsection title. subsection_titles = list(pd.Series(subsection_titles)\ .str.replace('=','')\ .str.lstrip(' ')) return subsection_titles if len(subsection_titles) == 0: #These rows will return an error if we try to apply the string methods from the above case, so we treat them separately. return [] #Add the new 'subsection_titles' variable to df. 
df['subsection_titles'] = df['columns_combined'].apply(article_subsection_titles) def article_first_sentence(string): """ Returns the first sentence of the Wikipedia article, identified by the first period after the URL. Note that the commas from the Wikipedia article will not appear. """ #For some articles there is no sentence in the dataset, and these articles need to be processed separately. #For these articles there will be at most 1 substring of the form '\t.+?\.', namely '\thttps://en.'from the URL. substring = re.findall(r'\t.+?\.', string) if len(substring) > 1: #Ignore the first matched string, '\thttps://en.', and skip to the second. string_starting_with_second_tab = substring[1] #Sometimes the second \t is followed by ", and then the first sentence begins. if string_starting_with_second_tab[1] == '"': #Return the string without the \t". return string_starting_with_second_tab[2:] #Other times it begins immediately after the second \t, without ". else: #Return the string without the \t. return string_starting_with_second_tab[1:] if len(substring) <= 1: #There is no sentence in the article. return '' #Add the new 'first_sentence' variable to df. df['first_sentence'] = df['columns_combined'].apply(article_first_sentence) #I ended up not using this function. def article_length(string): """ This returns the number of characters in the dataset for the input article. """ return len(string) #Add the new 'article_length' variable to df. df['article_length'] = df['columns_combined'].apply(article_length) df.iloc[:3, 3392:] def preprocess(column): """ This function preprocesses a column of strings (e.g. 'article_title') or a column of lists of strings (e.g. 'section_titles'). The input is a column of df written as a string (e.g. 'article_title' or 'section_titles'). Preprocessing consists of removing punctuation, converting to lower-case and removing stop words. 
The output is a series of all of the words that occur among all the rows of the input column, where each entry is a single word. """ #The entries of 'section_titles' and 'subsection_titles' are lists of strings. These columns need to be converted #to a single list for the following preprocessing steps to work. if column in ['section_titles', 'subsection_titles']: #Combine the lists into a single list. L = [] for i in range(df.shape[0]): L += df.loc[i, column] #Combine the list entries (strings) into a single string string = '' for i in range(len(L)): string += ' ' + L[i] #The entries of 'article_title', 'first_sentence' and 'columns_combined' are strings. else: #Combine the strings into a single string. string = '' for i in range(df.shape[0]): string += ' ' + df.loc[i, column] #Tokenize string into words and remove punctuation. word_list = nltk.RegexpTokenizer(r'\w+')\ .tokenize(string) #Convert words to lower-case. word_list = [word.lower() for word in word_list] #Remove stop words. #These are default stop words. stopwords = set(nltk.corpus.stopwords.words('english')) #These are additional stop words I have chosen by looking through the most common words in 'section_titles'. extra_stop_words = ['see', 'references', 'also', 'links', 'external', 'history', 'reading', 'notes', 'examples', 'definition', 'overview', 'example', 'related', 'bibliography', 'use', 'users', 'legal', 'two'] for word in extra_stop_words: stopwords.add(word) #The removal. word_list = [word for word in word_list if word not in stopwords] #Convert to a series so that we can apply Pandas methods to the output. return pd.Series(word_list) def concatenated_ngrams(preprocessed_column, n): """ This function takes a string as an input, and is intended specifically to take an output from preprocess() as its input. It returns the ngrams of a column for n = 2 or 3, a series of strings where each string consists of n words. """ if n == 2: #Create the bigrams. 
ngrams = list(nltk.ngrams(preprocessed_column, 2)) #ngrams is a list of 2-tuples. Combine each pair of elements into a string. L = [] for w1,w2 in ngrams: L.append(w1 + ' ' + w2) #Convert to a series. return pd.Series(L) if n == 3: #Create the 3-grams. ngrams = list(nltk.ngrams(preprocessed_column, 3)) #ngrams is a list of 3-tuples. Combine each triplet of elements into a string. L = [] for w1,w2,w3 in ngrams: L.append(w1 + ' ' + w2 + ' ' + w3) #Convert to a series. return pd.Series(L) ###Output _____no_output_____ ###Markdown **ARTICLE TITLES** ###Code #Preprocessing the article_title column. article_title_preprocessed = preprocess('article_title') #Counting and sorting the most common words among all the rows in article_title. titles_words_tallied = article_title_preprocessed.value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common bigrams among all the rows in article_title. titles_2grams_tallied = concatenated_ngrams(article_title_preprocessed, 2).value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common trigrams among all the rows in article_title. titles_3grams_tallied = concatenated_ngrams(article_title_preprocessed, 3).value_counts()\ .sort_values(ascending=False) #An example showing how an output of preprocess() looks. article_title_preprocessed[:3] #An example showing how a ..._words_tallied object looks. titles_words_tallied[:3] #An example showing how a ..._2grams_tallied object looks. titles_2grams_tallied[:3] #An example showing how a ..._3grams_tallied object looks. titles_3grams_tallied[:3] ###Output _____no_output_____ ###Markdown **SECTION TITLES** ###Code #Preprocessing the section_titles column. section_titles_preprocessed = preprocess('section_titles') #Counting and sorting the most common words among all the rows in section_titles. 
section_titles_words_tallied = section_titles_preprocessed.value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common bigrams among all the rows in section_titles. section_titles_2grams_tallied = concatenated_ngrams(section_titles_preprocessed, 2).value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common trigrams among all the rows in section_titles. section_titles_3grams_tallied = concatenated_ngrams(section_titles_preprocessed, 3).value_counts()\ .sort_values(ascending=False) ###Output _____no_output_____ ###Markdown **SUBSECTION TITLES** ###Code #Preprocessing the subsection_titles column. subsection_titles_preprocessed = preprocess('subsection_titles') #Counting and sorting the most common words among all the rows in subsection_titles. subsection_titles_words_tallied = subsection_titles_preprocessed.value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common bigrams among all the rows in subsection_titles. subsection_titles_2grams_tallied = concatenated_ngrams(subsection_titles_preprocessed, 2).value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common trigrams among all the rows in subsection_titles. subsection_titles_3grams_tallied = concatenated_ngrams(subsection_titles_preprocessed, 3).value_counts()\ .sort_values(ascending=False) ###Output _____no_output_____ ###Markdown **FIRST SENTENCES** ###Code #Preprocessing the first_sentence column. first_sentences_preprocessed = preprocess('first_sentence') #Counting and sorting the most common words among all the rows in first_sentence. first_sentences_words_tallied = preprocess('first_sentence').value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common bigrams among all the rows in first_sentence. 
first_sentence_2grams_tallied = concatenated_ngrams(first_sentences_preprocessed, 2).value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common trigrams among all the rows in first_sentence. first_sentence_3grams_tallied = concatenated_ngrams(first_sentences_preprocessed, 3).value_counts()\ .sort_values(ascending=False) ###Output _____no_output_____ ###Markdown **COLUMNS COMBINED** ###Code #Preprocessing the columns_combined column. columns_combined_preprocessed = preprocess('columns_combined') #Counting and sorting the most common words among all the rows in columns_combined. columns_combined_words_tallied = columns_combined_preprocessed.value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common bigrams among all the rows in columns_combined. columns_combined_2grams_tallied = concatenated_ngrams(columns_combined_preprocessed, 2).value_counts()\ .sort_values(ascending=False) #Counting and sorting the most common trigrams among all the rows in columns_combined. columns_combined_3grams_tallied = concatenated_ngrams(columns_combined_preprocessed, 3).value_counts()\ .sort_values(ascending=False) ###Output _____no_output_____
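The n-gram construction used in `concatenated_ngrams` can also be sketched with just the standard library, which makes the behavior of `nltk.ngrams` easier to see. This is an illustrative sketch with made-up words, not a replacement for the function above:

```python
words = ["artificial", "intelligence", "machine", "learning"]

# Bigrams: pair each word with its successor, then join with a space
bigrams = [f"{w1} {w2}" for w1, w2 in zip(words, words[1:])]
print(bigrams)
# ['artificial intelligence', 'intelligence machine', 'machine learning']

# Trigrams: the same idea with three staggered views of the list
trigrams = [f"{w1} {w2} {w3}" for w1, w2, w3 in zip(words, words[1:], words[2:])]
print(trigrams)
# ['artificial intelligence machine', 'intelligence machine learning']
```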
evaluations/sars-cov-2/3-index-genomes.case-10000-batch-10000.ipynb
###Markdown 1. Parameters ###Code # Defaults cases_dir = 'cases/unset' reference_file = 'references/NC_045512.gbk.gz' input_files_all = 'input/input-files.tsv' iterations = 3 mincov = 10 ncores = 32 number_samples = 10 build_tree = False sample_batch_size=2000 # Parameters cases_dir = "cases/case-10000-batch-10000" iterations = 3 number_samples = 10000 sample_batch_size = 10000 build_tree = False from pathlib import Path from shutil import rmtree from os import makedirs import imp fp, pathname, description = imp.find_module('gdi_benchmark', ['../../lib']) gdi_benchmark = imp.load_module('gdi_benchmark', fp, pathname, description) cases_dir_path = Path(cases_dir) if cases_dir_path.exists(): rmtree(cases_dir_path) if not cases_dir_path.exists(): makedirs(cases_dir_path) input_files_all = Path(input_files_all) reference_file = Path(reference_file) case_name = str(cases_dir_path.name) reference_name = reference_file.name.split('.')[0] cases_input = cases_dir_path / 'input-files-case.tsv' index_path = cases_dir_path / 'index' benchmark_path = cases_dir_path / 'index-info.tsv' output_tree = cases_dir_path / 'tree.tre' ###Output _____no_output_____ ###Markdown 2. Create subset input ###Code import pandas as pd all_input_df = pd.read_csv(input_files_all, sep='\t') all_input_total = len(all_input_df) subset_input_df = all_input_df.head(number_samples) subset_input_total = len(subset_input_df) subset_input_df.to_csv(cases_input, sep='\t', index=False) print(f'Wrote {subset_input_total}/{all_input_total} samples to {cases_input}') ###Output Wrote 10000/100000 samples to cases/case-10000-batch-10000/input-files-case.tsv ###Markdown 2. Index genomes ###Code !gdi --version ###Output gdi, version 0.4.0.dev1 ###Markdown 2.1. 
Index reads ###Code results_handler = gdi_benchmark.BenchmarkResultsHandler(name=case_name) benchmarker = gdi_benchmark.IndexBenchmarker(benchmark_results_handler=results_handler, index_path=index_path, input_files_file=cases_input, reference_file=reference_file, mincov=mincov, build_tree=build_tree, ncores=ncores, sample_batch_size=sample_batch_size) benchmark_df = benchmarker.benchmark(iterations=iterations) benchmark_df benchmark_df.to_csv(benchmark_path, sep='\t', index=False) ###Output _____no_output_____ ###Markdown 3. Export trees ###Code if build_tree: !gdi --project-dir {index_path} export tree {reference_name} > {output_tree} print(f'Wrote tree to {output_tree}') else: print(f'build_tree={build_tree} so no tree to export') ###Output build_tree=False so no tree to export
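The subsetting step above (`all_input_df.head(number_samples)` written back to TSV) can also be sketched with only the standard library. A hedged equivalent for a tiny in-memory TSV:

```python
import csv
import io

# Toy TSV standing in for input-files.tsv (hypothetical sample names)
tsv = "sample\tfile\nS1\ta.fa\nS2\tb.fa\nS3\tc.fa\n"
number_samples = 2

rows = list(csv.reader(io.StringIO(tsv), delimiter="\t"))
subset = [rows[0]] + rows[1:1 + number_samples]   # header + first N samples

out = io.StringIO()
csv.writer(out, delimiter="\t", lineterminator="\n").writerows(subset)
print(out.getvalue())
```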
QuantTrading/time-series-analyze_1-pandas.ipynb
###Markdown Time Series Analysis 1 - data manipulation in Pandas A description of the basic functions for data analysis in Pandas. Version and notebook info ###Code import datetime MY_VERSION = 1,0 print('Verze notebooku:', '.'.join(map(str, MY_VERSION))) print('Poslední aktualizace:', datetime.datetime.now()) ###Output Verze notebooku: 1.0 Poslední aktualizace: 2017-07-11 09:59:32.528510 ###Markdown Information about the Python modules used ###Code import sys import datetime import pandas as pd import pandas_datareader as pdr import pandas_datareader.data as pdr_web import quandl as ql # Load Quandl API key import json with open('quandl_key.json','r') as f: quandl_api_key = json.load(f) ql.ApiConfig.api_key = quandl_api_key['API-key'] print('Verze pythonu:') print(sys.version) print('---') print('Pandas:', pd.__version__) print('pandas-datareader:', pdr.__version__) print('Quandl version:', ql.version.VERSION) ###Output Verze pythonu: 3.6.1 |Anaconda custom (64-bit)| (default, May 11 2017, 13:25:24) [MSC v.1900 64 bit (AMD64)] --- Pandas: 0.20.2 pandas-datareader: 0.4.0 Quandl version: 3.1.0 ###Markdown List of sources:1. [Pandas - data manipulation and analysis](https://pandas.pydata.org/)+ [pandas-datareader](https://github.com/pydata/pandas-datareader)+ [List of all web data sources in pandas-datareader](https://pandas-datareader.readthedocs.io/en/latest/remote_data.html)+ [Python For Finance: Algorithmic Trading](https://www.datacamp.com/community/tutorials/finance-python-trading)+ [Quandl](https://www.quandl.com/)+ [ETF markets - financnik.cz](http://www.financnik.cz/komodity/financnik/trhy-podrobneji-etfs.html) [1]: https://sourceforge.net/p/jupiter/wiki/markdown_syntax/ `Series` and `DataFrame`The `pandas` library stores and processes data using its own types **`Series`** and **`DataFrame`**. A **`Series`** is a 1D labeled data structure holding values of a single type. A **`DataFrame`** is a 2D labeled data structure whose columns can hold different types.
The individual columns of a `DataFrame` are of type `Series`. More information in the [DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) and [Series](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html) documentation. Data to analyze ###Code start_date = datetime.datetime(2015, 1, 1) end_date = datetime.datetime.now() ES = ql.get("CHRIS/CME_ES1", start_date=start_date, end_date=end_date) ES.head() SPY = pdr_web.DataReader("NYSEARCA:SPY", 'google', start=start_date, end=end_date) SPY.head() ###Output _____no_output_____ ###Markdown Basic work with the data Showing the first `n` records of a `DataFrame`. ###Code n = 10 #ES.head() ES.head(n) ###Output _____no_output_____ ###Markdown Showing the last `n` records of a `DataFrame`. ###Code n = 10 #ES.tail() ES.tail(n) ###Output _____no_output_____ ###Markdown Showing some statistical information about each column of the `DataFrame`. ###Code ES.describe() ###Output _____no_output_____ ###Markdown Saving the data in a `DataFrame` to a `.csv` file ###Code ES.to_csv('data/es.csv') ###Output _____no_output_____ ###Markdown Loading data from a `.csv` file ###Code #data = pd.read_csv('data/es.csv') data = pd.read_csv('data/es.csv', header=0, index_col='Date', parse_dates=True) data.head(3) ###Output _____no_output_____ ###Markdown Information about the index and columns of the given `DataFrame` ###Code data.index data.columns ###Output _____no_output_____ ###Markdown Selecting specific data from a `DataFrame` Indexing Basic selection of data from a `DataFrame` can be done via indexing. ###Code # select the last 10 records of the Last column; the result is of type Series vyber = data['Last'][-10:] vyber ###Output _____no_output_____ ###Markdown Selection by label *(label-based)* and by position *(positional)*To select data by label, `pandas` uses the **`loc`** function. E.g.
`2017` or `2016-11-01` as the argument: ###Code data.loc['2016-11-01'] vyber = data.loc['2017'] print(vyber.head(5)) print(vyber.tail(5)) ###Output Open High Low Last Change Settle Volume \ Date 2017-01-03 2240.75 2259.50 2239.50 2252.50 16.25 2252.50 1787898.0 2017-01-04 2252.75 2267.25 2251.00 2264.50 11.75 2264.25 1385650.0 2017-01-05 2264.50 2266.00 2254.00 2265.00 NaN 2264.25 1312627.0 2017-01-06 2264.25 2277.00 2258.25 2270.75 7.25 2271.50 1542214.0 2017-01-09 2271.25 2275.25 2263.50 2264.25 6.50 2265.00 1019957.0 Previous Day Open Interest Date 2017-01-03 2787056.0 2017-01-04 2799661.0 2017-01-05 2804829.0 2017-01-06 2807328.0 2017-01-09 2815455.0 Open High Low Last Change Settle Volume \ Date 2017-07-03 2422.00 2436.50 2421.50 2423.75 4.0 2425.0 750433.0 2017-07-05 2423.75 2432.25 2419.25 2428.50 3.0 2428.0 1276362.0 2017-07-06 2428.00 2430.50 2405.25 2408.50 19.5 2408.5 1590195.0 2017-07-07 2409.25 2425.00 2407.50 2423.25 14.0 2422.5 1249140.0 2017-07-10 2423.00 2430.00 2419.25 2423.75 2.0 2424.5 849888.0 Previous Day Open Interest Date 2017-07-03 2839439.0 2017-07-05 2829039.0 2017-07-06 2829632.0 2017-07-07 2824775.0 2017-07-10 2820247.0 ###Markdown To select data by position, `pandas` provides the **`iloc`** function. For example, we pass `20` or `43` as the argument: ###Code # show row 20 print(data.iloc[20]) # show rows 0,1,2,3,4 and columns 0,1,2,3 data.iloc[[0,1,2,3,4], [0,1,2,3]] ###Output Open 1990.50 High 2018.50 Low 1973.75 Last 2015.25 Change 28.50 Settle 2017.00 Volume 2035431.00 Previous Day Open Interest 2740382.00 Name: 2015-02-02 00:00:00, dtype: float64 ###Markdown More in the detailed documentation [Indexing and Selecting Data](https://pandas.pydata.org/pandas-docs/stable/indexing.html). Adjusting the sampling of the time series Random data sample A random sample of the data can be obtained with the **`sample`** function. [Documentation for DataFrame.sample](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sample.html).
###Code # sample of 20 rows sample = data.sample(20) sample ###Output _____no_output_____ ###Markdown Getting a monthly sample from daily data The **`resample`** function allows flexible frequency conversion, like the **`asfreq`** function, and more besides. See the [resample documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html) and the [asfreq documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.asfreq.html). ###Code prumer = data.resample('M').mean() prumer.head() mesicni = data.asfreq("M", method="bfill") mesicni.head() ###Output _____no_output_____ ###Markdown Computing the volatility of EOD data The columns of a `DataFrame` support arithmetic directly. To get the volatility of each daily record, we simply subtract the `Low` column from the `High` column and store the result in a new column `ATR_1`. ###Code data['ATR_1'] = data.High - data.Low data.head() ###Output _____no_output_____ ###Markdown Deleting a column Columns can be deleted with the `del` keyword. ###Code del data['ATR_1'] data.head() ###Output _____no_output_____
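The one-day `High - Low` range computed above can also be smoothed into a rolling average, which is closer to how a range-based volatility measure is usually reported. A minimal sketch using the same `rolling` machinery shown earlier (the `add_range_columns` helper and the tiny synthetic price frame are illustrative, not part of the notebook's Quandl data):

```python
import pandas as pd

def add_range_columns(df, window=3):
    """Add the daily high-low range and its rolling mean (a simple volatility proxy)."""
    out = df.copy()
    out["Range"] = out["High"] - out["Low"]
    out["Range_MA"] = out["Range"].rolling(window=window).mean()
    return out

idx = pd.date_range("2017-01-01", periods=5, freq="D")
prices = pd.DataFrame({"High": [10.0, 11.0, 12.0, 11.5, 13.0],
                       "Low":  [9.0, 10.0, 10.5, 10.0, 11.0]}, index=idx)
result = add_range_columns(prices, window=3)
print(result["Range"].tolist())   # [1.0, 1.0, 1.5, 1.5, 2.0]
print(result["Range_MA"].iloc[-1])
```

As with the rolling means above, the first `window - 1` rows of `Range_MA` are `NaN` because the window is not yet full.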
Guidedlda.ipynb
###Markdown Preprocessing ###Code abr1 = abr1[['Abr','word']] abr1['Abr'] = abr1['Abr'].apply(lambda x : x.lower()) abr1['word'] = abr1['word'].apply(lambda x : x.lower()) trans_abr = abr1.set_index("Abr").T abrd = trans_abr.to_dict('list') def replace_abr(narr): # find all abbreviations that occur in the narrative abr_found = filter(lambda abr: abr in narr, abrd.keys()) # replace each abbreviation with its full word for abr in abr_found: narr = narr.replace(' '+abr+' ', ' '+abrd[abr][0]+' ') # return the modified string (or the original if no abbreviations were found) return narr asrs['Narrative'] = asrs['Narrative'].apply(lambda x : x.lower()) asrs["Narrative"] = asrs['Narrative'].str.replace('[^\w\s\d]','') asrs["Narrative"] = asrs['Narrative'].str.replace(r'\d+','') asrs['Narrative'] = asrs['Narrative'].apply(replace_abr) lemmatizer = WordNetLemmatizer() stop_words = set(stopwords.words('english')) clean = lambda new : " ".join([lemmatizer.lemmatize(i) for i in re.sub("[^a-zA-Z]", " " ,new.lower()).split() if i not in stop_words]).split() asrs['cleaned']=asrs['Narrative'].apply(clean) ###Output _____no_output_____ ###Markdown Guided LDA ###Code import guidedlda as glda # defining priors from research paper 13 lights = ['light', 'illuminated', 'caution', 'master', 'lights', 'panel', 'overhead', 'checklist', 'warning', 'maint'] passenger = ['flt', 'pax', 'attendants', 'attendant', 'seat', 'turbulence', 'fa', 'seated', 'hit', 'cabin'] avoiding_ground = ['terrain', 'ground', 'gpws', 'warning', 'approach', 'pull', 'climb', 'received', 'maneuver', 'approximately', 'atc'] from sklearn.feature_extraction.text import CountVectorizer docs = asrs['Narrative'].tolist() vectorizer = CountVectorizer() X = vectorizer.fit_transform(docs) word2id=vectorizer.vocabulary_ lights = [x for x in lights if x in list(word2id.keys())] passenger = [x for x in passenger if x in list(word2id.keys())] avoiding_ground = [x for x in avoiding_ground if x in list(word2id.keys())] seed_topic_list =
[lights,passenger,avoiding_ground] model = glda.GuidedLDA(n_topics=5, n_iter=2000, random_state=7, refresh=20,alpha=0.01,eta=0.01) seed_topics = {} for t_id, st in enumerate(seed_topic_list): for word in st: seed_topics[word2id[word]] = t_id model.fit(X, seed_topics=seed_topics, seed_confidence=0.15) ###Output INFO:guidedlda:n_documents: 500 INFO:guidedlda:vocab_size: 7154 INFO:guidedlda:n_words: 121786 INFO:guidedlda:n_topics: 5 INFO:guidedlda:n_iter: 2000
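Once `fit` has run, the usual next step is to inspect the highest-probability words of each topic. GuidedLDA exposes a topic-word matrix for this (in the library's examples it is the `topic_word_` attribute, used together with the vectorizer's vocabulary); since the fitted model is not reproduced here, the sketch below applies the same argsort idiom to a tiny hand-made matrix, so the `top_words` helper and the toy numbers are illustrative only:

```python
import numpy as np

def top_words(topic_word, vocab, n_top=2):
    """Return the n_top highest-probability words for each topic (row)."""
    return [[vocab[i] for i in np.argsort(row)[::-1][:n_top]] for row in topic_word]

vocab = ["light", "pax", "terrain", "cabin", "warning"]
topic_word = np.array([
    [0.5, 0.1, 0.1, 0.1, 0.2],   # a "lights"-like topic
    [0.1, 0.4, 0.1, 0.3, 0.1],   # a "passenger"-like topic
])
print(top_words(topic_word, vocab))   # [['light', 'warning'], ['pax', 'cabin']]
```

With the real model one would pass the fitted topic-word matrix and the vocabulary recovered from the `CountVectorizer` in place of the toy inputs.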
lecture4/lecture4.ipynb
###Markdown NICO2AI Lecture 4: Introduction to scikit-learn (18/01/27) 4.1 Object-oriented programming So far we have been coding without thinking about classes or object orientation, but from today on we will bring object-oriented ideas into our implementations. ###Code # Reset all variables first (press y and then enter) %reset # A. Import today's packages import os import numpy as np import matplotlib.pyplot as plt import matplotlib.font_manager as fm from matplotlib.colors import LogNorm from sklearn import datasets # B. Hide warnings that do not affect execution import warnings warnings.filterwarnings('ignore') # C1. Render plots inline %matplotlib inline # C2. Change the default plot style sheet plt.style.use('ggplot') plt.rcParams['ytick.color'] = '111111' plt.rcParams['xtick.color'] = '111111' plt.rcParams['axes.labelcolor'] = '111111' plt.rcParams['font.size'] = 15 ###Output _____no_output_____ ###Markdown Example of class design We create an Animal class whose member variables hold the animal's name and its cry. As methods we implement a constructor (the function called when an object is created), a method that sets the cry, and a method that makes the animal cry; we then create an object and make a dog go "Woof". * Slightly unlike the C example shown on the slides, the cry is set through a method here. * A class is a template and an object is something made from that template; the usual analogy is that the class is the taiyaki mold and the objects are the taiyaki. ###Code class Animal(object): """Animal class""" def __init__(self, name1): """Constructor: runs when the instance is first created""" self.name = name1 # name self.cry = 'cry undefined' # the cry starts out as an "undefined" placeholder def set_cry(self, input_cry): """Set the cry""" self.cry = input_cry # cry def sound(self): """sound the animal's cry""" print('cry: ' + self.cry) DOG = Animal('dog') # create an object DOG of the Animal class with the argument name1='dog' print(DOG.name) # member variables are accessed with a dot print(DOG.cry) # the cry is still undefined DOG.set_cry('woof') # set DOG's cry to 'woof' print(DOG.name) print(DOG.cry) DOG.sound() # run the sound() method of the DOG object ###Output _____no_output_____ ###Markdown Practice quiz Create a class Original_LS_regression that performs linear regression. Its member variable is theta_LS. Implement three methods: the constructor, fit(X, t), and y = predict(plt_X) (10 minutes). * Hint: in y = predict(), use return to output the function value. ###Code # Create the data rnd = np.random.RandomState(0) # fix the random seed so everyone gets the same result N_TRAIN = 30 # number of training samples x = np.linspace(-3, 3, N_TRAIN) t = 0.5 * x + 0.2 + 0.1 * rnd.randn(N_TRAIN) # generate data points with Gaussian noise # 2. 
Create the matrix X with a bias column added (its coefficient is always 1) b = np.ones(N_TRAIN) X = np.stack((x, b), axis=1) # Declare your own linear-regression class # WRITE ME! # Answer # Declare your own linear-regression class # Run it my_reg = OriginalLsRegression() # create the object my_reg.fit(X, t) # run the fit method on the data print(my_reg.theta_ls) # show the class parameters y_pred = my_reg.predict(X) # prediction: compute y = theta_LS . X # Plot the data-generating function (the line y), the predicted function (the line y_pred), and the data points t y = 0.5 * x + 0.2 plt.figure(figsize=(8, 6)) plt.title("linear regression using original Class") # title plt.plot(x, y_pred, color="b", label="Predicted function", linewidth=2) # add a label plt.plot(x, y, color="r", label="Data generation function", linewidth=2) # add a label plt.scatter(x, t, marker="x", color="r", label="Training points") # add a label plt.xlim(-3, 3) # xlim(min, max) plt.ylim(-2, 2) # ylim(min, max) plt.xlabel("Input") plt.ylabel("Output") plt.legend(loc="lower right") # the loc argument controls where the legend appears plt.tight_layout() # automatically adjusts the layout if labels overflow the figure plt.show() ###Output _____no_output_____ ###Markdown 4.2 Debugging techniques We will learn debugging techniques, including pdb debugging. As programs get more complex, finding the cause of a bug becomes harder, so here we introduce a few ways to track bugs down. - **print debugging:** insert print statements at suspicious places and inspect the variables. This works in any programming language and is used constantly. - **pdb:** Python's own debugging tool. It lets you inspect behavior interactively, which is convenient. 4.2.1 print debugging Print debugging means printing the contents of a variable at the point you want to inspect and checking whether it matches what you expect. - simply print the variable: print(DATA) - check the type of the data: type(DATA) - check the size of a list: len(DATA) - check the shape of a numpy array: DATA.shape ###Code DATA = [1, 2, 3, 4, 5, 6] # approach 1 print(DATA) # approach 2 print(len(DATA)) # approach 3 print(type(DATA)) DATA = np.zeros([10, 4]) # approach 4 print(DATA.shape) ###Output _____no_output_____ ###Markdown 4.2.2 pdb debugging Here we introduce pdb, a debugging library that ships with Python as standard. Insert the function ``pdb.set_trace()`` just before the point where the error occurs, or wherever you want to watch a variable's behavior. The program then pauses at that point and an interactive interface appears. In the example below, try typing variable names to check their contents. **When you finish with pdb, always exit with the q (quit) or c (continue) command.** How to use the pdb commands - a : show the arguments of the current function (x and y in the example above) - l : show the code around the current line - p expression : show the value of the expression, e.g. p z or p x - n : execute the next line - q : quit pdb - c : continue to the next breakpoint - 
You can also run ordinary statements such as print. There are other commands as well, so look them up. In the example below, remove the comment marks and try each command. * In the Colaboratory environment this sometimes does not work well; q in particular can misbehave, and the last line may raise an error at pdb.set_trace(). ###Code import pdb def plus(x=2, y=3): """This is a sum-method.""" x = 2 y = 3 z = x + y pdb.set_trace() # here print("%d + %d = %d" % (x, y, z)) plus() # x # pp x+y # n # x > 0 # c ###Output _____no_output_____ ###Markdown Practice quiz The following code was used for linear regression last time, but part of it is wrong and it does not run. Read the error message and debug it (6 minutes). Hint: the way to inspect the shape of a numpy array is variable.shape. ###Code import numpy as np rnd = np.random.RandomState(1701) N_TRAIN = 25 # number of training samples # generate data points with Gaussian noise train_x = np.linspace(-3, 3, N_TRAIN) train_y = 0.5 * train_x + 0.2 + 0.2 * rnd.randn(len(train_x)) # X = (train_x with a column vector of bias 1 appended) X = np.stack((train_x, np.ones(len(train_x)))) # theta = (X^T X)^{-1} X^T y (solve the equation) theta = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, train_y)) print(theta) # Answer # Answer ###Output _____no_output_____ ###Markdown 4.3 scikit-learn regression We study scikit-learn, a machine-learning library. The basic scikit-learn workflow has four steps: 1. prepare the data 2. choose a model 3. fit the model to the training data (fit) 4. predict on the test data (predict) Writing linear regression with scikit-learn In this basic exercise we take linear regression as our example. **1. First, prepare the data** ###Code # 1. Prepare the data rnd = np.random.RandomState(1701) n_train = 25 # number of training samples # generate data points with Gaussian noise train_x = np.linspace(-3, 3, n_train) train_y = 0.5 * train_x + 0.2 + 0.2 * rnd.randn(len(train_x)) # * When using scikit-learn, reshape the data into column-vector form. print(train_x.shape) train_x = train_x.reshape(-1,1) # reshape(-1,1) means: as many rows as there are elements, one column train_y = train_y.reshape(-1,1) print(train_x.shape) ###Output _____no_output_____ ###Markdown **2. Choose a model** For linear regression we use ``sklearn.linear_model.LinearRegression``. This is also where you specify any parameters. ###Code # 2. Choose the model from sklearn import linear_model clf = linear_model.LinearRegression() ###Output _____no_output_____ ###Markdown **3. Fit the model** ###Code # 3. 
Fit the model clf.fit(train_x, train_y) # In linear regression you often care more about the fitted line itself than about predictions; in that case do the following print("coef={}".format(clf.coef_)) print("intercept={}".format(clf.intercept_)) ###Output _____no_output_____ ###Markdown **4. Predict on test data** ###Code # 4. Predict on the test data test_x = [[-2.25], [-1], [0.5], [2.3]] predict = clf.predict(test_x) print(predict) # plot the training and test data # the generating function y = 0.5 * train_x + 0.2 plt.plot(train_x, y, color="r", label="original") # training data plt.scatter(train_x, train_y, marker="x", color="r") plt.plot(train_x, clf.predict(train_x.reshape(-1, 1)), color="b", label="regression") # prediction plt.scatter(test_x, predict, color="b") plt.xlim(-3, 3) # xlim(min, max) plt.ylim(-2, 2) # ylim(min, max) plt.xlabel("Input") plt.ylabel("Output") plt.legend(loc="lower right") # the loc argument controls where the legend appears plt.tight_layout() # automatically adjusts the layout if labels overflow the figure plt.show() ###Output _____no_output_____ ###Markdown Practice quiz For the data (x, t) below, find the slope and intercept coefficients with scikit-learn's least-squares linear fit (5 minutes). ###Code # 1. Create the data rnd = np.random.RandomState(0) # fix the random seed so everyone gets the same result N_TRAIN = 30 # number of training samples x = np.linspace(-3, 3, N_TRAIN) t = 5.5 * x + 1.2 + 2.0 * rnd.randn(N_TRAIN) # generate data points with Gaussian noise # 2. Choose the model # write me! # 3. Fit the model # write me! # 4. 
Results print(clf.coef_) # slope print(clf.intercept_) # intercept # Answer # plot the training and test data # the generating function y = 5.5 * x + 1.2 plt.plot(x, y, color="r", label="original") # training data plt.plot(x, clf.predict(x.reshape(-1, 1)), color="b", label="estimate") plt.scatter(x, t, color="r") # predict on the test data test_x = [[-2.25], [-1], [0.5], [2.3]] predict = clf.predict(test_x) plt.scatter(test_x, predict, color="b") plt.legend(loc="lower right") # the loc argument controls where the legend appears plt.tight_layout() # automatically adjusts the layout if labels overflow the figure plt.show() ###Output _____no_output_____ ###Markdown Check the official site Not just for linear regression: the official site carries detailed explanations, so whenever you are unsure how to use something, check it there. For example: what were the arguments again? what is the attribute name for the bias? what methods does it have? http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html Normalization and train/test splitting ###Code from sklearn import preprocessing, cross_validation # Normalization # let x be the data x = np.linspace(-3, 3, N_TRAIN) x = x.reshape(-1, 1) print(x) sc=preprocessing.StandardScaler() sc.fit(x) # compute the mean and standard deviation X_std=sc.transform(x) # apply the normalization print(X_std) # Split off test data # let z be the label # the generating function y = 5.5 * x + 1.2 X_train, X_test, train_label, test_label = cross_validation.train_test_split(X_std, y, test_size=0.1, random_state=0) print(X_train.shape) print(train_label.shape) ###Output _____no_output_____ ###Markdown 4.4 Supervised learning with scikit-learn We implement classification by supervised learning with scikit-learn. Take a look at the algorithm map: http://scikit-learn.org/stable/tutorial/machine_learning_map/ We proceed here without splitting the data into training and validation sets. 4.4.1 Classification with a linear SVM ###Code # 1: import libraries-------------------------------- import numpy as np import pandas as pd import matplotlib.pyplot as plt # plotting library from sklearn import neighbors, metrics, preprocessing, cross_validation # machine-learning library # 2: create the data np.random.seed(0) X = np.random.randn(200, 2) true_false = (X[:, 0] ) > 0 # True if the first column (feature 0) of X is positive y = np.where(true_false, 1, 0) # print(pd.DataFrame(X).head()) # run this line to look at the data print(pd.DataFrame(y).head()) # run this line to look at the data # 3: plot it------------------------------------------------------ 
plt.scatter(X[y==1, 0], X[y==1, 1], c='r', marker='x', label='1') plt.scatter(X[y==0, 0], X[y==0, 1], c='b', marker='s', label='0') plt.legend(loc='best') plt.show() # 4: build the classifier----------------------------------------------------- from sklearn import svm clf = svm.LinearSVC() clf.fit(X, y) ###Output _____no_output_____ ###Markdown Look at the hyperparameters of SVC http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html Tips for choosing hyperparameters First search on a coarse grid, for example in steps of a factor of 10. scikit-learn also provides tools such as grid search. ###Code # 5: predict for some value----------------------------------------------------- X_test = [[1, 1]] print(clf.predict(X_test)) # 6: draw the decision surface----------------------------------------------------- # http://scikit-learn.org/stable/auto_examples/ensemble/plot_voting_decision_regions.html#sphx-glr-auto-examples-ensemble-plot-voting-decision-regions-py x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) f, axarr = plt.subplots(1, 1) axarr.contourf(xx, yy, Z, alpha=0.4) plt.scatter(X[y==1, 0], X[y==1, 1], c='r', marker='x', label='1') plt.scatter(X[y==0, 0], X[y==0, 1], c='b', marker='s', label='0') ###Output _____no_output_____ ###Markdown Practice quiz For the data below, draw the decision surface and find the classification result for the point (0, 0). ###Code # data np.random.seed(0) X = np.random.randn(200, 2) true_false = (X[:, 0] + 0.2*X[:, 1]) > 0.3 # create a tilted boundary y = np.where(true_false, 1, 0) plt.scatter(X[y==1, 0], X[y==1, 1], c='r', marker='x', label='1') plt.scatter(X[y==0, 0], X[y==0, 1], c='b', marker='s', label='0') plt.legend(loc='best') plt.show() # WRITE ME! 
X_test = [[0, 0]] print(clf.predict(X_test)) # Answer # draw the decision surface----------------------------------------------------- # http://scikit-learn.org/stable/auto_examples/ensemble/plot_voting_decision_regions.html#sphx-glr-auto-examples-ensemble-plot-voting-decision-regions-py x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) f, axarr = plt.subplots(1, 1) axarr.contourf(xx, yy, Z, alpha=0.4) plt.scatter(X[y==1, 0], X[y==1, 1], c='r', marker='x', label='1') plt.scatter(X[y==0, 0], X[y==0, 1], c='b', marker='s', label='0') ###Output _____no_output_____ ###Markdown We now explain SVC on the slides; over to the slides. 4.4.2 Classification with a kernel SVM Not linearly separable ###Code # create XOR data np.random.seed(0) X = np.random.randn(200, 2) true_false = (X[:, 0] * X[:, 1]) > 0 y = np.where(true_false, 1, 0) # plot it------------------------------------------------------ plt.scatter(X[y==1, 0], X[y==1, 1], c='r', marker='x', label='1') plt.scatter(X[y==0, 0], X[y==0, 1], c='b', marker='s', label='0') plt.legend(loc='best') plt.show() # try a linear SVM from sklearn import svm clf = svm.LinearSVC() clf.fit(X, y) X_test = [[0, 0]] print(clf.predict(X_test)) # draw the decision surface----------------------------------------------------- # http://scikit-learn.org/stable/auto_examples/ensemble/plot_voting_decision_regions.html#sphx-glr-auto-examples-ensemble-plot-voting-decision-regions-py x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) f, axarr = plt.subplots(1, 1) axarr.contourf(xx, yy, Z, alpha=0.4) plt.scatter(X[y==1, 0], X[y==1, 1], c='r', marker='x', label='1') plt.scatter(X[y==0, 0], X[y==0, 1], c='b', marker='s', label='0') ###Output 
_____no_output_____ ###Markdown The linear model cannot cleanly separate the red and blue points... so we use a kernel SVM https://www.youtube.com/watch?v=3liCbRZPrZA&feature=youtu.be ###Code # try a kernel SVM from sklearn import svm clf = svm.SVC(kernel='rbf', C=1.0, gamma =1/2) # look up each argument on the scikit-learn page later! clf.fit(X, y) # draw the decision surface----------------------------------------------------- # http://scikit-learn.org/stable/auto_examples/ensemble/plot_voting_decision_regions.html#sphx-glr-auto-examples-ensemble-plot-voting-decision-regions-py x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) f, axarr = plt.subplots(1, 1) axarr.contourf(xx, yy, Z, alpha=0.4) plt.scatter(X[y==1, 0], X[y==1, 1], c='r', marker='x', label='1') plt.scatter(X[y==0, 0], X[y==0, 1], c='b', marker='s', label='0') ###Output _____no_output_____ ###Markdown Back to the slides 4.5 Unsupervised learning with scikit-learn We implement clustering with k-means, an unsupervised method, using scikit-learn. http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html For the DATA below, use scikit-learn's KMeans to split the data into two clusters. Note: normalize the data beforehand. ###Code # 1: create 2D Gaussian data mean = [10, 20] a = 1 * 10 b = 0.3 *10 cov = [[a, b], [b, a]] # covariance x, y = np.random.multivariate_normal(mean, cov, 100).T y = y *100 # 2: create 2D Gaussian data mean = [20, 10] a = 0.8 * 10 b = 0.4 *10 cov = [[a, b], [b, a]] # covariance x2, y2 = np.random.multivariate_normal(mean, cov, 100).T y2 = y2 *100 # 3: plot X=(np.r_[x, x2]) Y=(np.r_[y, y2]) DATA = np.stack((X, Y), axis=1) plt.scatter(X,Y, marker='o',s = 30,c='gray',edgecolors='') plt.show() # k-Means # 1: import libraries-------------------------------- from sklearn import cluster, preprocessing # machine-learning library # 2: normalize the data sc=preprocessing.StandardScaler() DATA_std= sc.fit_transform(DATA) # 3: kMeans----------------------------- km=cluster.KMeans(n_clusters=2) z_km=km.fit(DATA_std) plt.scatter(X, 
Y, marker='o',s = 30,c=z_km.labels_) plt.show() # 4: check whether it worked print("count={}".format(sum(z_km.labels_[:100]))) # OK if this is 100 or 0 # Now try it without the normalization # 1: import libraries-------------------------------- from sklearn import cluster, preprocessing # machine-learning library # 2: normalize the data (disabled) #sc=preprocessing.StandardScaler() #DATA_std= sc.fit_transform(DATA) # 3: kMeans----------------------------- km=cluster.KMeans(n_clusters=2) z_km=km.fit(DATA) plt.scatter(X, Y, marker='o',s = 30,c=z_km.labels_) plt.show() # 4: check whether it worked print("count={}".format(sum(z_km.labels_[:100]))) # OK if this is 100 or 0 ###Output _____no_output_____ ###Markdown 4.6 Dimensionality reduction with scikit-learn We implement dimensionality reduction by principal component analysis (PCA) with scikit-learn. http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html Exercise For the 4-dimensional Iris data, use scikit-learn's PCA to compress the data down to 2 dimensions. ###Code from sklearn import datasets iris = datasets.load_iris() # the iris data is 4-dimensional iris.feature_names X = iris.data Y = iris.target # visualizing only the first two dimensions, the classes are mixed together plt.scatter(X[:, 0], X[:, 1], c=Y) plt.xlabel('sepal length') plt.ylabel('sepal width') plt.show() # 1. import the PCA library from sklearn import decomposition, preprocessing # 2: normalize the data------------------------------------ sc = preprocessing.StandardScaler() sc.fit(X) X = sc.transform(X) # 3: run the principal component analysis------------------------------- pca = decomposition.PCA(n_components=2) X_transformed = pca.fit_transform(X) # 4: PCA results----------------------------- print("explained variance ratio of the components") print(pca.explained_variance_ratio_) print("eigenvectors") print(pca.components_) # 5: plot the result----------------------------- %matplotlib inline plt.scatter(X_transformed[:, 0], X_transformed[:, 1], c=Y) plt.xlabel('PC1') plt.ylabel('PC2') ###Output _____no_output_____ ###Markdown Exercise 4 Nilearn: Machine learning for Neuro-Imaging in Python Let's classify which image a subject is looking at from fMRI BOLD-signal data via Nilearn http://nilearn.github.io/auto_examples/plot_decoding_tutorial.htmlsphx-glr-auto-examples-plot-decoding-tutorial-py We predict what the subject was looking at from the fMRI data of Haxby, James V., et al. 
"Distributed and overlapping representations of faces and objects in ventral temporal cortex." Science 293.5539 (2001): 2425-2430. Preparation in Colaboratory (not executed this time; it follows the original tutorial almost exactly) The nilearn library is not installed in Colaboratory by default. You need to install it yourself using the pip command. ###Code !pip install nilearn import nilearn # download the data from nilearn import datasets # By default 2nd subject will be fetched haxby_dataset = datasets.fetch_haxby() # 'func' is a list of filenames: one for each subject fmri_filename = haxby_dataset.func[0] # print basic information on the dataset print('First subject functional nifti images (4D) are at: %s' % fmri_filename) # 4D data haxby_dataset # visualize it from nilearn.plotting import plot_stat_map, show # The mask is a mask of the Ventral Temporal streaming coming from the Haxby study mask_filename = haxby_dataset.mask_vt[0] # Let's visualize it, using the subject's anatomical image as a # background from nilearn import plotting plotting.plot_roi(mask_filename, bg_img=haxby_dataset.anat[0], cmap='Paired') show() # convert the 4D data to 2D # http://nilearn.github.io/building_blocks/manual_pipeline.html#masking from nilearn.input_data import NiftiMasker masker = NiftiMasker(mask_img=mask_filename, standardize=True) # We give the masker a filename and retrieve a 2D array ready # for machine learning with scikit-learn fmri_masked = masker.fit_transform(fmri_filename) print(fmri_masked) # check the number of rows and columns print(fmri_masked.shape) # the output is (1452L, 464L) # what is the sampling rate? probably 2 Hz? 
# label what the subject was looking at at each time point import pandas as pd # Load behavioral information behavioral = pd.read_csv(haxby_dataset.session_target[0], sep=" ") print(behavioral) # the output is [1452 rows x 2 columns] # we only want what was being looked at, so extract it conditions = behavioral['labels'] print(conditions) # extract only the face and cat data condition_mask = conditions.isin(['face', 'cat']) # We apply this mask in the same direction to restrict the # classification to the face vs cat discrimination # the fMRI data fmri_masked = fmri_masked[condition_mask] print(fmri_masked.shape) # output (216L, 464L) # the face-or-cat labels conditions = conditions[condition_mask] print(conditions.shape) # the face-or-cat labels conditions = conditions[condition_mask] print(conditions.shape) # output (216L,) from sklearn import preprocessing, cross_validation # prepare the test data X_test = fmri_masked[-40:] test_label = conditions[-40:] X_tmp = fmri_masked[:-40] tmp_label = conditions[:-40] # save everything import pandas as pd df1 = pd.DataFrame(fmri_masked) df2 = pd.DataFrame(conditions) df1.to_csv('fmri_masked_all.csv') df2.to_csv('conditions_all.csv') pd.DataFrame(X_test).to_csv('fmri_masked_test.csv') pd.DataFrame(test_label).to_csv('conditions_test.csv') pd.DataFrame(X_tmp).to_csv('fmri_masked.csv') pd.DataFrame(tmp_label).to_csv('conditions.csv') ###Output _____no_output_____ ###Markdown Runnable from here ###Code import os !wget "https://drive.google.com/uc?export=download&id=1k7nQFYg9UKiRFP5cHWqNjdGaTpXBx1jn" -O Haxby_fMRI_data.zip !unzip -o Haxby_fMRI_data.zip # Example 1 # load the data import pandas as pd conditions = pd.read_csv('Haxby_fMRI_data/conditions.csv') fmri_masked = pd.read_csv('Haxby_fMRI_data/fmri_masked.csv') import pandas as pd from sklearn import cross_validation # split into training and validation data X_learn, X_val, learn_label, val_label = cross_validation.train_test_split(fmri_masked.iloc[:,1:], conditions['labels'], test_size=0.5, random_state=0) # iloc drops the index stored in the first column # build the classifier from sklearn.svm import SVC clf = SVC(kernel='linear', C=1.0) # try changing the hyperparameters, the kernel, or even the method itself 
clf.fit(X_learn, learn_label) # classify prediction=clf.predict(X_val) # accuracy on the validation data print((prediction == val_label).sum() / float(len( val_label))) # Example 2 # load the data import pandas as pd conditions = pd.read_csv('Haxby_fMRI_data/conditions.csv') fmri_masked = pd.read_csv('Haxby_fMRI_data/fmri_masked.csv') import pandas as pd from sklearn import cross_validation # organize the data X = fmri_masked.iloc[:,1:] label = conditions['labels'] # build the classifier from sklearn.svm import SVC clf = SVC(kernel='linear', C=1.0) # try changing the hyperparameters, the kernel, or even the method itself # evaluate the performance with K-fold cross validation (K=3) # this corresponds to the Inner Cross Validation of lecture 2 scores=cross_validation.cross_val_score(clf, X, label, cv=3) print("mean accuracy = ", scores.mean()) print("std of accuracy = ", scores.std()) # Example 3 conditions = pd.read_csv('Haxby_fMRI_data/conditions.csv') fmri_masked = pd.read_csv('Haxby_fMRI_data/fmri_masked.csv') import pandas as pd from sklearn import cross_validation # split into training and validation data: use the first 1/3 for training and the rest for validation X = fmri_masked.iloc[:,1:] label = conditions['labels'] learn_points = int(len(X)/3) X_learn = X[:learn_points] X_val = X[learn_points:] learn_label= label[:learn_points] val_label= label[learn_points:] # build the classifier from sklearn.svm import SVC clf = SVC(kernel='linear', C = 1.0, gamma=1/200) # try changing the hyperparameters, the kernel, or even the method itself clf.fit(X_learn, learn_label) # classify prediction=clf.predict(X_val) # accuracy on the validation data print((prediction == val_label).sum() / float(len( val_label))) # write it!! # --------------------------------------------------------------------------- # The data used in the contest has the same format as what is loaded by # conditions = pd.read_csv('Haxby_fMRI_data/conditions.csv') # fmri_masked = pd.read_csv('Haxby_fMRI_data/fmri_masked.csv') # (we will run the "Contest time!" cell at the very bottom). # If you build your classifier under the name clf, that cell can run as-is. # --------------------------------------------------------------------------- # Contest time!! # add the link code to make this runnable import os !wget "https://drive.google.com/uc?export=download&id=??????????????????????" 
-O Haxby_fMRI_all_data.zip !unzip -o Haxby_fMRI_all_data.zip # load the data conditions = pd.read_csv('Haxby_fMRI_all_data/conditions_test.csv') fmri_masked = pd.read_csv('Haxby_fMRI_all_data/fmri_masked_test.csv') X = fmri_masked.iloc[:,1:] label = conditions['labels'] # classify prediction=clf.predict(X) # accuracy on the held-out data print((prediction == label).sum() / float(len(label))) ###Output _____no_output_____
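The accuracy expression used throughout this lecture, `(prediction == label).sum() / len(label)`, and the K-fold scheme behind `cross_val_score` in Example 2 can be written out by hand. A minimal sketch in plain numpy (the `kfold_indices` helper, the toy labels, and the always-predict-0 "classifier" are illustrative, not the SVC pipeline above):

```python
import numpy as np

def kfold_indices(n_samples, k):
    """Yield (train_idx, val_idx) pairs for k contiguous folds."""
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

def accuracy(pred, truth):
    return float(np.mean(np.asarray(pred) == np.asarray(truth)))

y = np.array([0, 0, 1, 1, 0, 1])
# score a trivial classifier that always predicts 0, fold by fold
scores = [accuracy(np.zeros(len(val)), y[val]) for _, val in kfold_indices(len(y), 3)]
print(scores)   # [1.0, 0.0, 0.5]
```

`cross_val_score` does the same bookkeeping, but clones and refits the classifier on each training fold before scoring its validation fold.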
6 - Dockerize Functions.ipynb
###Markdown Serialize Python Functions as Docker imagesHere a python decorator is created that perform the following steps:- Serialize the function that is decorated- Create a docker image around the function ###Code import dill import os import tempfile import docker # *args, **kwargs """ docker_image is an annotation that make a python method runnable in a container @param image: name of the image to create """ def docker_image(image_name): FUNCTION_FILE = "main.txt" MAIN_SCRIPT = "my_script.py" # Serialize function into root_directory def serialize_func(func, root_directory): ser = dill.dumps(func) f = open(os.path.join(root_directory, FUNCTION_FILE), "wb") f.write(ser) f.close() def create_main_script(root_directory): my_script = f""" import dill f = open('{FUNCTION_FILE}', "rb") main_function = dill.load(f) f.close() main_function() """ f = open(os.path.join(root_directory, MAIN_SCRIPT), "w") f.write(my_script) f.close() def create_docker_file(root_directory): Dockerfile = f""" FROM python:3.7.6-alpine3.10 RUN pip install dill ADD {MAIN_SCRIPT} / ADD {FUNCTION_FILE} / ENTRYPOINT [ "python3", "./{MAIN_SCRIPT}" ] """ f = open(os.path.join(root_directory, "Dockerfile"), "w") f.write(Dockerfile) f.close() def docker_build(client, image_name, root_directory): client.images.build( path=root_directory, rm=True, tag=image_name) def docker_push(client, image_name): client.images.push(image_name, stream=True, decode=True) def inner(func): root_directory = tempfile.mkdtemp() serialize_func(func, root_directory) create_main_script(root_directory) create_docker_file(root_directory) client = docker.DockerClient(base_url='unix://var/run/docker.sock') docker_build(client, image_name, root_directory) docker_push(client, image_name) return inner @docker_image(image_name = "localhost:32000/library/volume") def func(): import sys print("This is the name of the script: ", sys.argv[0]) print("Number of arguments: ", len(sys.argv)) print("The arguments are: " , str(sys.argv)) 
print("Inside actual function") import glob print(glob.glob("/mounting/*")) !docker run -v $(pwd):/mounting localhost:32000/library/volume Hello! ###Output This is the name of the script: ./my_script.py Number of arguments: 2 The arguments are: ['./my_script.py', 'Hello!'] Inside actual function ['/mounting/run-notebook.sh', '/mounting/5 - Workflow - Argo With Parameters.ipynb', '/mounting/README.md', '/mounting/test.yaml', '/mounting/4 - Workflow - DAG.ipynb', '/mounting/3 - Workflow - Single Task.ipynb', '/mounting/Main.ipynb', '/mounting/Serialize Functions.ipynb', '/mounting/rmi-advanced.ipynb', '/mounting/FreeSurfer Executor.ipynb', '/mounting/1 - Minio Connection.ipynb', '/mounting/data', '/mounting/endpoint.yaml', '/mounting/LICENSE', '/mounting/ubuntu.txt', '/mounting/argo-test.yaml', '/mounting/test-pvc.yaml', '/mounting/rmi.ipynb', '/mounting/2 - Create Docker Image.ipynb', '/mounting/service.yaml', '/mounting/license.txt', '/mounting/minio-deployment.yaml']
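The image's entrypoint works because `MAIN_SCRIPT` simply deserializes `main.txt` and calls the result. That round trip can be checked without Docker at all. A minimal sketch (using the standard `pickle` module so it runs anywhere; the notebook uses `dill`, which additionally handles lambdas and closures, but the load-then-call pattern is the same):

```python
import pickle

def greet():
    return "hello from the container"

# serialize the function, as the decorator does with dill.dumps
payload = pickle.dumps(greet)

# this mirrors what MAIN_SCRIPT does inside the image: load, then call
main_function = pickle.loads(payload)
print(main_function())   # hello from the container
```

`pickle` stores a top-level function by reference, which is exactly why the decorator needs `dill`: a function defined interactively in a notebook has no importable module for the container to resolve, so its code object must be serialized by value.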
Beginner Level Task/Stock Market Prediction/Stock Market Prediction data Analysis.ipynb
###Markdown NSE-TATAGLOBAL DATASETS Stock Market Prediction And Forecasting Using Stacked LSTM LGMVIP Task-2|| Data Science To build the stock price prediction model, we will use the NSE TATA GLOBAL dataset. This is a dataset of Tata Beverages from Tata Global Beverages Limited, National Stock Exchange of India: Tata Global Dataset Import Libraries ###Code import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import io import requests import datetime ###Output _____no_output_____ ###Markdown Import Datasets ###Code url="https://raw.githubusercontent.com/mwitiderrick/stockprice/master/NSE-TATAGLOBAL.csv" df=pd.read_csv(url) df.head() df ###Output _____no_output_____ ###Markdown Shape of data ###Code df.shape ###Output _____no_output_____ ###Markdown Gathering information about the data ###Code df.info() df.describe() df.dtypes ###Output _____no_output_____ ###Markdown Data Cleaning Total percentage of data is missing ###Code missing_values_count = df.isnull().sum() total_cells = np.product(df.shape) total_missing = missing_values_count.sum() percentage_missing = (total_missing/total_cells)*100 print(percentage_missing) NAN = [(c, df[c].isnull().mean()*100) for c in df] NAN = pd.DataFrame(NAN, columns=['column_name', 'percentage']) NAN ###Output _____no_output_____ ###Markdown Data Visualisation ###Code sns.set(rc = {'figure.figsize': (20, 5)}) df['Open'].plot(linewidth = 1,color='blue') df.columns cols_plot = ['Open','High','Low','Last','Close'] axes = df[cols_plot].plot(alpha = 1, figsize=(20, 30), subplots = True) for ax in axes: ax.set_ylabel('Variation') ###Output _____no_output_____ ###Markdown Sort the dataset on date time and filter “Date” and “Open” columns ###Code df["Date"]=pd.to_datetime(df.Date,format="%Y-%m-%d") df.index=df['Date'] df del df["Date"] df df.dtypes ###Output _____no_output_____ ###Markdown 7 day rolling mean ###Code df.rolling(7).mean().head(10) df['Open'].plot(figsize=(20,8),alpha = 1) 
df.rolling(window=30).mean()['Close'].plot(alpha = 1) df['Close: 30 Day Mean'] = df['Close'].rolling(window=30).mean() df[['Close','Close: 30 Day Mean']].plot(figsize=(20,8),alpha = 1) ###Output _____no_output_____ ###Markdown Optionally specify a minimum number of periods ###Code df['Close'].expanding(min_periods=1).mean().plot(figsize=(20,8),alpha = 1) training_set=df['Open'] training_set=pd.DataFrame(training_set) ###Output _____no_output_____ ###Markdown Feature Scaling ###Code from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler(feature_range=(0,1)) training_set_df = scaler.fit_transform(training_set) ###Output _____no_output_____ ###Markdown Creating a data structure with 60 time steps and 1 output ###Code train_x = [] train_y = [] for i in range(60,2035): train_x.append(training_set_df[i-60:i,0]) train_y.append(training_set_df[i,0]) train_x, train_y = np.array(train_x),np.array(train_y) ###Output _____no_output_____ ###Markdown Reshaping ###Code train_x = np.reshape(train_x, ( train_x.shape[0], train_x.shape[1],1)) train_x.shape ###Output _____no_output_____ ###Markdown Building The Model ###Code from keras.models import Sequential from keras.layers import LSTM from keras.layers import Dropout from keras.layers import Dense # initialise the model lstm_model=Sequential() ###Output _____no_output_____ ###Markdown Add Input Layer & Regularization ###Code # add the first LSTM layer and some Dropout regularisation lstm_model.add(LSTM(units=50,return_sequences=True,input_shape=(train_x.shape[1],1))) lstm_model.add(Dropout(0.2)) # add the second LSTM layer and some Dropout regularisation lstm_model.add(LSTM(units=50,return_sequences=True)) lstm_model.add(Dropout(0.2)) # add the third LSTM layer and some Dropout regularisation; return_sequences is left # False here so the Dense output layer below receives a single vector per sample lstm_model.add(LSTM(units=50)) lstm_model.add(Dropout(0.2)) ###Output _____no_output_____ ###Markdown Add Output Layer ###Code lstm_model.add(Dense(units = 1)) ###Output
_____no_output_____
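The 60-step windowing cell above can be sketched as a small standalone helper. This is a generic pure-Python restatement of the idea (the `make_windows` name is hypothetical, not part of the notebook), shown here because the index arithmetic is easy to get off by one:

```python
# Sliding-window construction: from a 1-D series, build (window, target)
# pairs where each target is the value immediately after its window.
def make_windows(series, window=60):
    xs, ys = [], []
    for i in range(window, len(series)):
        xs.append(series[i - window:i])  # the previous `window` values
        ys.append(series[i])             # the next value to predict
    return xs, ys

series = list(range(100))
xs, ys = make_windows(series, window=60)
print(len(xs), len(xs[0]), ys[0])  # -> 40 60 60
```

With 100 points and a window of 60 you get exactly 100 - 60 = 40 training pairs, which is why the notebook's `range(60, 2035)` yields 1975 windows, presumably because the training set has 2035 rows.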
MNIST-BayesianAE.ipynb
###Markdown Prepare the Dataset ###Code dataset_name = 'MNIST' import numpy as np from keras.datasets import mnist (X, y), (X_test, y_test) = mnist.load_data() X = np.concatenate((X, X_test)) y = np.concatenate((y, y_test)) imgs = X del X_test del y_test print('Dataset size {}'.format(X.shape)) ###Output Using TensorFlow backend. ###Markdown BayesianAE ###Code %load_ext autoreload %autoreload 2 from utils.constants import Models as models from models.AE import AE ae = AE(model_type=models.BayAE, dataset_name=dataset_name,hidden_dim=500, plot=True, isConv=False) #ae.fit(X,y) from utils.plots import plot_samples, merge from skimage.transform import resize import matplotlib.pyplot as plt for _ in range(5): samples = ae.reconst_samples_out_data() scale = 10 im = merge(samples, (10,10)) fig_width = int(im.shape[0] * scale) fig_height = int(im.shape[1] * scale) im = resize(im, (fig_width, fig_height), anti_aliasing=True) plt.figure(dpi=150) plt.imshow(im) plt.axis('off') ###Output Loading model checkpoint experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 ... INFO:tensorflow:Restoring parameters from experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 Model loaded EPOCHS trained: 130 random sample batch ... Loading model checkpoint experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 ... INFO:tensorflow:Restoring parameters from experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 Model loaded EPOCHS trained: 130 random sample batch ... Loading model checkpoint experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 ... INFO:tensorflow:Restoring parameters from experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 Model loaded EPOCHS trained: 130 random sample batch ... Loading model checkpoint experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 ... 
INFO:tensorflow:Restoring parameters from experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 Model loaded EPOCHS trained: 130 random sample batch ... Loading model checkpoint experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 ... INFO:tensorflow:Restoring parameters from experiments/checkpoint_dir/BayAE__MNIST_lat15_h500_lay3_mc10/-113750 Model loaded EPOCHS trained: 130 random sample batch ...
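The `merge(samples, (10, 10))` call above tiles the 100 reconstructions into one grid image before resizing and plotting. As a rough illustration of what such a helper does (a pure-Python sketch with a hypothetical `tile_images` name, not the actual `utils.plots.merge` implementation):

```python
# Arrange n_rows * n_cols equally sized images into one big grid image.
def tile_images(images, n_rows, n_cols):
    h = len(images[0])       # image height
    w = len(images[0][0])    # image width
    grid = [[0] * (n_cols * w) for _ in range(n_rows * h)]
    for idx, img in enumerate(images[:n_rows * n_cols]):
        r, c = divmod(idx, n_cols)       # grid cell for this image
        for y in range(h):
            for x in range(w):
                grid[r * h + y][c * w + x] = img[y][x]
    return grid

# four 2x2 "images", tiled into a 2x2 grid -> one 4x4 image
imgs = [[[k, k], [k, k]] for k in range(4)]
grid = tile_images(imgs, 2, 2)
```

Each image index maps to a (row, column) cell via `divmod`, and its pixels are copied into the corresponding offset of the output grid.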
dementia_optima/models/misc/data_kernel_sl_nr_ft_newfeaturevariable_maya_notAskedandnull_mmse_range_0_30.ipynb
###Markdown ------ **Dementia Patients -- Analysis and Prediction** ***Author : Akhilesh Vyas*** ****Date : August, 2019**** ***Result Plots*** - 0. Setup - 0.1. Load libraries - 0.2. Define paths - 1. Data Preparation - 1.1. Read Data - 1.2. Prepare data - 1.3. Prepare target - 1.4. Removing Unwanted Features - 2. Data Analysis - 2.1. Feature - 2.2. Target - 3. Data Preparation and Vector Transformation - 4. Analysis and Imputing Missing Values - 5. Feature Analysis - 5.1. Correlation Matrix - 5.2. Feature and target - 5.3. Feature Selection Models - 6. Machine Learning - Classification Model 0. Setup 0.1 Load libraries ###Code import sys sys.path.insert(1, '../preprocessing/') import numpy as np import pickle import scipy.stats as spstats import matplotlib.pyplot as plt import seaborn as sns import pandas_profiling from sklearn.datasets.base import Bunch #from data_transformation_cls import FeatureTransform from ast import literal_eval import plotly.figure_factory as ff import plotly.offline as py import plotly.graph_objects as go import pandas as pd pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) pd.set_option('display.max_colwidth', -1) from ordered_set import OrderedSet %matplotlib inline ###Output /home/vyasa/pythonEnv/lib/python3.6/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.datasets.base module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.datasets. Anything that cannot be imported from sklearn.datasets is now part of the private API.
warnings.warn(message, FutureWarning) ###Markdown 0.2 Define paths ###Code # data_path # !cp -r ../../../datalcdem/data/optima/dementia_18July/data_notasked/ ../../../datalcdem/data/optima/dementia_18July/data_notasked_mmse_0_30/ data_path = '../../../datalcdem/data/optima/dementia_18July/data_notasked_mmse_0_30/' result_path = '../../../datalcdem/data/optima/dementia_18July/data_notasked_mmse_0_30/results/' optima_path = '../../../datalcdem/data/optima/optima_excel/' ###Output _____no_output_____ ###Markdown 1. Data Preparation 1.1. Read Data ###Code #Preparation Features from Raw data # Patient Comorbidities data '''patient_com_raw_df = pd.read_csv(data_path + 'optima_patients_comorbidities.csv').groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False).agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Comorbidity_cui']] display(patient_com_raw_df.head(5)) patient_com_raw_df['EPISODE_DATE'] = pd.to_datetime(patient_com_raw_df['EPISODE_DATE']) # Patient Treatment data patient_treat_raw_df = pd.read_csv(data_path + 'optima_patients_treatments.csv').groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False).agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Medication_cui']] display(patient_treat_raw_df.head(5)) patient_treat_raw_df['EPISODE_DATE'] = pd.to_datetime(patient_treat_raw_df['EPISODE_DATE']) # Join Patient Treatment and Comorbidities data patient_com_treat_raw_df = pd.merge(patient_com_raw_df, patient_treat_raw_df,on=['patient_id', 'EPISODE_DATE'], how='outer') patient_com_treat_raw_df.sort_values(by=['patient_id', 'EPISODE_DATE'],axis=0, inplace=True, ascending=True) patient_com_treat_raw_df.reset_index(drop=True, inplace=True) patient_com_treat_raw_df.head(5) #Saving data patient_com_treat_raw_df.to_csv(data_path + 'patient_com_treat_episode_df.csv', index=False)''' # Extracting selected features from Raw data def rename_columns(col_list): d = {} for i in col_list: if i=='GLOBAL_PATIENT_DB_ID': d[i]='patient_id' elif 'CAMDEX SCORES: ' in 
i: d[i]=i.replace('CAMDEX SCORES: ', '').replace(' ', '_') elif 'CAMDEX ADMINISTRATION 1-12: ' in i: d[i]=i.replace('CAMDEX ADMINISTRATION 1-12: ', '').replace(' ', '_') elif 'DIAGNOSIS 334-351: ' in i: d[i]=i.replace('DIAGNOSIS 334-351: ', '').replace(' ', '_') elif 'OPTIMA DIAGNOSES V 2010: ' in i: d[i]=i.replace('OPTIMA DIAGNOSES V 2010: ', '').replace(' ', '_') elif 'PM INFORMATION: ' in i: d[i]=i.replace('PM INFORMATION: ', '').replace(' ', '_') else: d[i]=i.replace(' ', '_') return d sel_col_df = pd.read_excel(data_path+'Variable_Guide_Highlighted_Fields_.xlsx') display(sel_col_df.head(5)) sel_cols = [i+j.replace('+', ':')for i,j in zip(sel_col_df['Sub Category '].tolist(), sel_col_df['Variable Label'].tolist())] rem_cols= ['OPTIMA DIAGNOSES V 2010: OTHER SYSTEMIC ILLNESS: COMMENT'] # Missing column in the dataset sel_cols = sorted(list(set(sel_cols)-set(rem_cols))) print (sel_cols) columns_selected = list(OrderedSet(['GLOBAL_PATIENT_DB_ID', 'EPISODE_DATE', 'CAMDEX SCORES: MINI MENTAL SCORE'] + sel_cols)) df_datarequest = pd.read_excel(optima_path+'Data_Request_Jan_2019_final.xlsx') display(df_datarequest.head(1)) df_datarequest_features = df_datarequest[columns_selected] display(df_datarequest_features.columns) columns_renamed = rename_columns(df_datarequest_features.columns.tolist()) df_datarequest_features.rename(columns=columns_renamed, inplace=True) display(df_datarequest_features.head(5)) # df_datarequest_features.drop(columns=['Age_At_Episode', 'PETERSEN_MCI_TYPE'], inplace=True) display(df_datarequest_features.head(5)) # drop columns having out of range MMSE value df_datarequest_features = df_datarequest_features[(df_datarequest_features['MINI_MENTAL_SCORE']<=30) & (df_datarequest_features['MINI_MENTAL_SCORE']>=0)] # Merging Join Patient Treatment, Comorbidities and selected features from raw data #patient_com_treat_raw_df['EPISODE_DATE'] = pd.to_datetime(patient_com_treat_raw_df['EPISODE_DATE']) #patient_com_treat_fea_raw_df = 
pd.merge(patient_com_treat_raw_df,df_datarequest_features,on=['patient_id', 'EPISODE_DATE'], how='left') #patient_com_treat_fea_raw_df.sort_values(by=['patient_id', 'EPISODE_DATE'],axis=0, inplace=True, ascending=True) #patient_com_treat_fea_raw_df.reset_index(inplace=True, drop=True) #display(patient_com_treat_fea_raw_df.head(5)) patient_com_treat_fea_raw_df = df_datarequest_features # Need to be changed ------------------------ # Filling misssing MMSE value with patient group Average #patient_com_treat_fea_raw_df['MINI_MENTAL_SCORE']\ # = patient_com_treat_fea_raw_df.groupby(by=['patient_id'])['MINI_MENTAL_SCORE'].transform(lambda x: x.fillna(x.mean())) display(patient_com_treat_fea_raw_df.head(5)) # 19<=Mild<=24 , 14<=Moderate<=18 , Severe<=13 #patient_com_treat_fea_raw_df['MINI_MENTAL_SCORE_CATEGORY']=np.nan def change_minimentalscore_to_category(df): df.loc[(df['MINI_MENTAL_SCORE']<=30) & (df['MINI_MENTAL_SCORE']>24),'MINI_MENTAL_SCORE_CATEGORY'] = 'Normal' df.loc[(df['MINI_MENTAL_SCORE']<=24) & (df['MINI_MENTAL_SCORE']>=19), 'MINI_MENTAL_SCORE_CATEGORY'] = 'Mild' df.loc[(df['MINI_MENTAL_SCORE']<=18) & (df['MINI_MENTAL_SCORE']>=14), 'MINI_MENTAL_SCORE_CATEGORY'] = 'Moderate' df.loc[(df['MINI_MENTAL_SCORE']<=13) & (df['MINI_MENTAL_SCORE']>=0),'MINI_MENTAL_SCORE_CATEGORY'] = 'Severe' return df #patient_com_treat_fea_raw_df = change_minimentalscore_to_category(patient_com_treat_fea_raw_df) # saving file patient_com_treat_fea_raw_df.to_csv(data_path + 'patient_com_treat_fea_episode_raw_without_expand_df.csv', index=False) # Set line number for treatment line def setLineNumber(lst): lst_dict = {ide:0 for ide in lst} lineNumber_list = [] for idx in lst: if idx in lst_dict: lst_dict[idx] = lst_dict[idx] + 1 lineNumber_list.append(lst_dict[idx]) return lineNumber_list patient_com_treat_fea_raw_df['lineNumber'] = setLineNumber(patient_com_treat_fea_raw_df['patient_id'].tolist()) display(patient_com_treat_fea_raw_df.head(5)) # Extend episode data into columns def 
extend_episode_data(df): id_dict = {i:0 for i in df['patient_id'].tolist()} for x in df['patient_id'].tolist(): if x in id_dict: id_dict[x]=id_dict[x]+1 line_updated = [int(j) for i in id_dict.values() for j in range(1,i+1)] # print (line_updated[0:10]) df.update(pd.Series(line_updated, name='lineNumber'),errors='ignore') print ('\n----------------After creating line-number for each patients------------------') display(df.head(20)) # merging episodes based on id and creating new columns for each episode r = df['lineNumber'].max() print ('Max line:',r) l = [df[df['lineNumber']==i] for i in range(1, int(r+1))] print('Number of Dfs to merge: ',len(l)) df_new = pd.DataFrame() tmp_id = [] for i, df_l in enumerate(l): df_l = df_l[~df_l['patient_id'].isin(tmp_id)] for j, df_ll in enumerate(l[i+1:]): #df_l = df_l.merge(df_ll, on='id', how='left', suffix=(str(j), str(j+1))) #suffixe is not working #print (j) df_l = df_l.join(df_ll.set_index('patient_id'), on='patient_id', rsuffix='_'+str(j+1)) tmp_id = tmp_id + df_l['patient_id'].tolist() #display(df_l) df_new = df_new.append(df_l, ignore_index=True, sort=False) return df_new patient_com_treat_fea_raw_df['lineNumber'] = setLineNumber(patient_com_treat_fea_raw_df['patient_id'].tolist()) # drop rows with duplicated episode for a patient patient_com_treat_fea_raw_df = patient_com_treat_fea_raw_df.drop_duplicates(subset=['patient_id', 'EPISODE_DATE']) patient_com_treat_fea_raw_df.sort_values(by=['patient_id', 'EPISODE_DATE'], inplace=True) columns = patient_com_treat_fea_raw_df.columns.tolist() patient_com_treat_fea_raw_df = patient_com_treat_fea_raw_df[columns[0:2]+columns[-1:] +columns[2:4]+columns[-2:-1] +columns[4:-2]] # Expand patient # patient_com_treat_fea_raw_df = extend_episode_data(patient_com_treat_fea_raw_df) patient_com_treat_fea_raw_df.drop(columns=['MINI_MENTAL_SCORE'], inplace=True) display(patient_com_treat_fea_raw_df.head(2)) # Saving extended episode of each patients # 
patient_com_treat_fea_raw_df.to_csv(data_path + 'patient_com_treat_fea_episode_raw_df.csv', index=False) patient_com_treat_fea_raw_df.shape display(patient_com_treat_fea_raw_df.describe(include='all')) display(patient_com_treat_fea_raw_df.info()) tmp_l = [] for i in range(len(patient_com_treat_fea_raw_df.index)) : # print("Nan in row ", i , " : " , patient_com_treat_fea_raw_df.iloc[i].isnull().sum()) tmp_l.append(patient_com_treat_fea_raw_df.iloc[i].isnull().sum()) plt.hist(tmp_l) plt.show() # find NAN and Notasked after filled value def findnotasked(v): #print(v) c = 0.0 flag = False try: for i in v: if float(i)<9.0 and float(i)>=0.0 and flag==False: #float(i)<9.0 and float(i)>=0.0: flag = True elif (float(i)==9.0 and flag==True): c = c+1 except: pass '''try: for i in v: if i!=9.0 or i!=i: #float(i)<9.0 and float(i)>=0.0: flag = True elif (float(i)==9.0 and flag==True): c = c+1 except: pass''' return c def findnan(v): #print(v) c = 0.0 flag = False try: for i in v: if float(i)<9.0 and float(i)>=0.0 and flag==False: #float(i)<9.0 and float(i)>=0.0: flag = True elif (float(i)!=float(i) and flag==True): c = c+1 except: pass '''try: for i in v: if i!=9.0 or i!=i: #float(i)<9.0 and float(i)>=0.0: flag = True elif (float(i)!=float(i) and flag==True): c = c+1 except: pass''' return c df = patient_com_treat_fea_raw_df[list( set([col for col in patient_com_treat_fea_raw_df.columns.tolist()]) -set(['EPISODE_DATE']))] tmpdf = pd.DataFrame(data=df['patient_id'].unique(), columns=['patient_id']) display(tmpdf.head(5)) for col in df.columns.tolist(): #print (col) tmp_df1 = df.groupby(by=['patient_id'])[col].apply(lambda x : findnotasked(x) ).reset_index(name='Count(notAsked)_'+col ) tmp_df2 = df.groupby(by=['patient_id'])[col].apply(lambda x : findnan(x) ).reset_index(name='Count(nan)_'+col ) #print (tmp_df1.isnull().sum().sum(), tmp_df2.isnull().sum().sum()) tmpdf = tmpdf.merge(tmp_df1, on=['patient_id'], how='inner') tmpdf = tmpdf.merge(tmp_df2, on=['patient_id'], 
how='inner') #print (tmpdf.columns.tolist()[-2]) # display(tmpdf) # display(tmpdf.agg(lambda x: x.sum(), axis=1)) col_notasked = [col for col in tmpdf.columns if 'Count(notAsked)_' in col] col_nan = [col for col in tmpdf.columns.tolist() if 'Count(nan)_' in col] tmpdf['count_Total(notasked)']=tmpdf[col_notasked].agg(lambda x: x.sum(),axis=1) tmpdf['count_Total(nan)']=tmpdf[col_nan].agg(lambda x: x.sum(),axis=1) display(tmpdf.head(5)) profile = tmpdf.profile_report(title='Dementia Profiling Report') profile.to_file(output_file= result_path + "dementia_data_profiling_report_output_all_patients_notasked_nan.html") # find NAN and Notasked after filled value def findnotasked_full(v): #print(v) c = 0.0 try: for i in v: if float(i)==9.0: c = c+1 except: pass return c def findnan_full(v): c = 0.0 try: for i in v: if float(i)!=i: c = c+1 except: pass return c df = patient_com_treat_fea_raw_df[list( set([col for col in patient_com_treat_fea_raw_df.columns.tolist()]) -set(['EPISODE_DATE']))] tmpdf_full = pd.DataFrame(data=df['patient_id'].unique(), columns=['patient_id']) display(tmpdf_full.head(5)) for col in df.columns.tolist(): #print (col) tmp_df1_full = df.groupby(by=['patient_id'])[col].apply(lambda x : findnotasked_full(x) ).reset_index(name='Count(notAsked)_'+col ) tmp_df2_full = df.groupby(by=['patient_id'])[col].apply(lambda x : findnan_full(x) ).reset_index(name='Count(nan)_'+col ) #print (tmp_df1.isnull().sum().sum(), tmp_df2.isnull().sum().sum()) tmpdf_full = tmpdf_full.merge(tmp_df1_full, on=['patient_id'], how='inner') tmpdf_full = tmpdf_full.merge(tmp_df2_full, on=['patient_id'], how='inner') #print (tmpdf.columns.tolist()[-2]) #display(tmpdf) #display(tmpdf.agg(lambda x: x.sum(), axis=1)) col_notasked_full = [col for col in tmpdf_full.columns if 'Count(notAsked)_' in col] col_nan_full = [col for col in tmpdf_full.columns.tolist() if 'Count(nan)_' in col] tmpdf_full['count_Total(notasked)']=tmpdf_full[col_notasked].agg(lambda x: x.sum(),axis=1) 
tmpdf_full['count_Total(nan)']=tmpdf_full[col_nan].agg(lambda x: x.sum(),axis=1) display(tmpdf_full.head(5)) profile = tmpdf_full.profile_report(title='Dementia Profiling Report') profile.to_file(output_file= result_path + "dementia_data_profiling_report_output_all_patients_notasked_nan_full.html") # profile = patient_com_treat_fea_raw_df.profile_report(title='Dementia Profiling Report', style={'full_width':True}) profile = patient_com_treat_fea_raw_df.profile_report(title='Dementia Profiling Report') profile.to_file(output_file= result_path + "dementia_data_profiling_report_output_all_patients_notasked.html") #columnswise sum total_notasked_nan = tmpdf.sum(axis = 0, skipna = True) fl =[f.replace('Count(notAsked)_', '') if 'notAsked' in f else f.replace('Count(nan)_', '') for f in total_notasked_nan.index] l =['NotAsked' if 'notAsked' in f else 'Null' for f in total_notasked_nan.index] total_notasked_nan_df = pd.DataFrame(data={'Feature':total_notasked_nan.index, 'Value':total_notasked_nan, 'Type':l}) total_notasked_nan_df['Feature']=fl total_notasked_nan_df.to_csv(data_path+'total_notasked_nan.csv', index=True) total_notasked_nan_com = tmpdf_full.sum(axis = 0, skipna = True) fl_full =[f.replace('Count(notAsked)_', '') if 'notAsked' in f else f.replace('Count(nan)_', '') for f in total_notasked_nan_com.index] l_full =['NotAsked' if 'notAsked' in f else 'Null' for f in total_notasked_nan_com.index] total_notasked_nan_com_df = pd.DataFrame(data={'Feature':total_notasked_nan_com.index, 'Value':total_notasked_nan_com, 'Type':l_full}) total_notasked_nan_com_df['Feature']=fl_full total_notasked_nan_com_df.to_csv(data_path+'total_notasked_nan_com.csv', index=True) ###Output _____no_output_____
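The MMSE banding used above in `change_minimentalscore_to_category` (25-30 Normal, 19-24 Mild, 14-18 Moderate, 0-13 Severe) can be restated as a scalar helper. This is a simplified pure-Python sketch of the same rule, not the pandas version from the notebook:

```python
def mmse_category(score):
    """Map an MMSE score (0-30) to the severity band used in the notebook."""
    if not 0 <= score <= 30:
        return None          # out-of-range scores were dropped upstream
    if score > 24:
        return 'Normal'      # 25-30
    if score >= 19:
        return 'Mild'        # 19-24
    if score >= 14:
        return 'Moderate'    # 14-18
    return 'Severe'          # 0-13
```

Checking the band edges (24, 19, 18, 14, 13) against this helper is a quick way to confirm the pandas masks above do not overlap or leave gaps.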
software/visualization/Block_Reaching_Visualization.ipynb
###Markdown ReachMaster Block Analysis Intended to be used repeatedly on single-trial blocks of video, kinematic, and experiment data. Items required to run: experimental dataframe, kinematic dataframe, and a block video of known date/session/rat. The following code imports and utilizes experimental data to coarsely segment trial video blocks into individual trials. Additional blocks save the images and video for each trial into a folder named with user-supplied information. The final blocks import the kinematic data for a given rat (the rat you are examining in the video, e.g. RM16) and visualize data across blocks and for a given trial. ###Code ## import seaborn as sns from ReachViz import ReachViz block_video_file = '/Users/bassp/OneDrive/Desktop/Classification Project/2019-09-20-S1-RM14_cam2_DLC.mp4' #block_video_file = '/Users/bassp/OneDrive/Desktop/bwlabeled.mp4' kin_file = 'DataFrames/3D_positions_RM14.pkl' exp_datafile = 'DataFrames/RM14_expdf.pickle' date = '20' session = 'S1' rat = 'RM14' R = ReachViz(date,session,exp_datafile, block_video_file,kin_file, rat) #R.reach_splitter_threshold() R.vid_splitter_and_grapher() def loop_over_rats_and_extract_reaches(prediction_dataframe,e_dataframe, dummy_video_path, rat): save_path = '/Users/bassp/OneDrive/Desktop/Classification Project/reach_thresholds_RM15/' # Get rat, date, session for each block we need to process.
k_dataframe = pd.read_pickle(prediction_dataframe) #pdb.set_trace() for kk in k_dataframe: session = kk.columns[2][1] date = kk.columns[2][0][2:4] print(session,date) R = ReachViz(date,session,e_dataframe, dummy_video_path,prediction_dataframe, rat) reaching, mask, bout = R.reach_splitter_threshold(save_dir=save_path) return #loop_over_rats_and_extract_reaches(kin_file,exp_datafile,block_video_file, rat) # Not all camera angles are created equal: let's use a weighting of cameras 1, 2, and 3 to create a metric for or against interpolation # Apply a mean filter for segmentation (we can't let just any values be looked at, general domain boundary...) # Post-segmentation, apply more coordinated filtering, e.g. compare trajectories across trials, data-driven analysis # Detect single-reach, double-reach, or "multi-reach" import imageio import pdb images = [] writer = imageio.get_writer('bw.mp4', fps=30) reader = imageio.get_reader(block_video_file) for img in reader.iter_data(): writer.append_data(img[:,:,1]) # single colour channel (grayscale) writer.close() def rz_length(rz_array,time_array): ### Takes in matched-length Reward Zone and Normalized Time arrays (m_times) ### mask=np.zeros(len(time_array)) for ix,ct in enumerate(rz_array): if ct == 1: mask[ix]=1 # get a rough amount of time spent in the reward zone from the number of exposures # exposure times are variable, but only to the 4th decimal place, so we are ok (ms) rz_len = np.count_nonzero(mask == 1) flip=0 for ji,jn in enumerate(mask): try: if jn == 1: if mask[ji-1]==0: if mask[ji+1]==1: if mask[ji-20]==0: flip+=1 except: flip=flip return rz_len,flip def trial_length(start,stop,sf): t_length = [] reward_region = [] for i,x in enumerate(start): if i in sf: t_length.append(stop[i]-x) reward_region.append(stop[i]+200-x+50) return t_length,reward_region ###Output _____no_output_____
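The inner loop of `rz_length` is essentially entry detection on a binary mask: count the 0 -> 1 transitions. Stripped of the notebook's extra 20-sample lookback guard, the core idea can be sketched as:

```python
def count_entries(mask):
    """Count 0 -> 1 transitions (reward-zone entries) in a binary mask,
    plus the total number of samples spent inside the zone."""
    entries, prev = 0, 0
    for v in mask:
        if v == 1 and prev == 0:
            entries += 1
        prev = v
    inside = sum(1 for v in mask if v == 1)
    return entries, inside

print(count_entries([0, 1, 1, 0, 0, 1, 0, 1]))  # -> (3, 4)
```

The lookback guard in the notebook additionally requires the mask to have been 0 for some samples before an entry, which suppresses counting rapid re-entries as separate events.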
agile_estimation/agile_estimation_3.ipynb
###Markdown Agile Estimation: missed opportunity and missed deadlines Why do companies care when their development projects will be completed? Obviously, to get some benefit from the project sooner. It may be an increase in sales or profit, or a reduction in cost. We call it missed opportunity, and it has a cost in dollars. Calculating missed opportunity is easier and more straightforward and, what is more important, much less misleading than calculating ROI. Just think about how many times the actual ROI from a software project was several orders of magnitude less than projected. Using missed opportunity calculations also helps you prioritize projects. In this notebook we will try to estimate the probability distribution of the missed opportunity of a single project based on the number of story points the team can complete in one iteration. As discussed in [the previous notebook](agile_estimation_2.ipynb), we will use the Log-Normal distribution to estimate project velocity. ###Code import numpy as np from scipy.stats import lognorm data=np.array([14, 12, 7, 14, 13]) shape, loc, scale = lognorm.fit(data, floc=0) ###Output _____no_output_____ ###Markdown Here we took the information about the team's past iterations (14, 12, 7, 14, 13 story points respectively) and fitted it to the log-normal distribution. We are interested in the question: how many iterations will a given number of story points (in this example 70) take?
Again, we use the wonderful property of the log-normal distribution that the inverse is also log-normal with the same parameter $\sigma$ and negated parameter $-\mu$. ###Code num_points = 70 dist_iterations = lognorm(shape, loc, num_points/scale) print(f'Mean: {dist_iterations.mean()}') print(f'Median {dist_iterations.median()}') print(f'Standard deviation {dist_iterations.std()}') #We plot the distribution %matplotlib inline import matplotlib.pyplot as plt def plot_dist(frozen, low=0, high=14): fig, ax = plt.subplots(1, 1) x = np.linspace(low, high, 100) ax.plot(x, frozen.pdf(x), 'r-', lw=5, alpha=0.6, label='lognorm pdf') plot_dist(dist_iterations); ###Output _____no_output_____ ###Markdown So we see that we have a good chance to complete it within 7 iterations, but there is a chance it may take up to 12 iterations! Let's say the business is losing $10,000 per iteration as missed opportunity. Then the distribution of the missed opportunity will be the following: ###Code missed_opportunity_per_iteration = 10000 missed_opportunity = lognorm(shape, loc, num_points/scale*missed_opportunity_per_iteration) print(f'Mean: {missed_opportunity.mean()}') print(f'Median {missed_opportunity.median()}') print(f'Standard deviation {missed_opportunity.std()}') plot_dist(missed_opportunity, 0, 140000); ###Output Mean: 62196.382850879614 Median 60117.768482548934 Standard deviation 16496.33974043927 ###Markdown As we see, we have every incentive to complete the project sooner and move the curve to the left. Maybe we add more developers to increase velocity? We may also want to reduce scope to reduce the number of story points. Finally, despite what some Agile theorists say, the business sets deadlines for a reason. When a software project is done, the business has to do UAT, reach out to some of the customers and ask them to provide feedback, etc.
The business would also like to plan this in advance, and since the closure activities have a fixed cost, if the project is not delivered on time, this will add to the project cost. We call it cost of delay. If the missed opportunity cost is zero, then to avoid the cost of delay we plan the closure activities as late as possible. But if it is non-zero, then there will be a trade-off between the two costs. So if $C$ is the closure cost, $C_o$ is the missed opportunity cost per iteration, $N$ is the actual number of iterations and $M$ is the number of iterations planned, then the total cost will be:$$ T_c = M C_o + C P(N > M) $$We need to minimize this cost over $M$. We can take a derivative with respect to $M$. Note that $P(N > M)$ is what is called the survival function, or $1 - CDF$, where CDF is the cumulative distribution function. The derivative of the survival function is the negative of the probability density function. Thus the optimal value of $M$ is defined by the equation:$$ C_o - C p(M) = 0 $$In this example we guess that the delay cost is $95,000 ###Code #We solve the equation numerically: from scipy.optimize import * delay_cost = 95000 def to_optimize(m): return missed_opportunity_per_iteration - delay_cost*dist_iterations.pdf(m) roots = newton_krylov(to_optimize, 8.0) float(roots) ###Output _____no_output_____
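The stationarity condition $C_o = C\,p(M)$ can be sanity-checked by brute force: evaluate $T_c(M) = M C_o + C\,P(N > M)$ on a grid and take the argmin. The sketch below uses only the standard library and re-derives $\sigma$ and the scale from the raw data, assuming the same `floc=0` log-normal fit as above:

```python
# Brute-force check of the optimal planned iteration count M.
import math

data = [14, 12, 7, 14, 13]
logs = [math.log(x) for x in data]
mu = sum(logs) / len(logs)                      # MLE of the log-mean
sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))
scale = 70 / math.exp(mu)                       # iterations for 70 story points

def survival(m):
    """P(N > m) for the fitted log-normal distribution of iterations."""
    return 0.5 * math.erfc(math.log(m / scale) / (sigma * math.sqrt(2)))

co, c = 10000, 95000                            # opportunity cost, closure cost
grid = [1 + 0.01 * k for k in range(1400)]      # candidate M from 1 to ~15
best_m = min(grid, key=lambda m: m * co + c * survival(m))
# best_m should land near 8 iterations, consistent with C_o - C p(M) = 0
```

Because the total cost is flat near its minimum, being off by half an iteration in the plan costs relatively little, which is reassuring given the uncertainty in the fitted velocity.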
docs/getting-started.ipynb
###Markdown In this tutorial we will explore the basic features of **Elegy**. If you are a Keras user you should feel at home; if you are currently using Jax or Haiku things will appear much more streamlined. To get started you will first need to install the following dependencies: ###Code ! pip install elegy dataget matplotlib ###Output _____no_output_____ ###Markdown Note that Elegy doesn't depend on `jax` since there are both `cpu` and `gpu` versions you can choose from, so you will need to install it separately. Loading the Data In this tutorial we will train a Neural Network on the MNIST dataset; for this we will first need to download and load the data into memory. Here we will use `dataget` for simplicity but you can use your favorite datasets library. ###Code import dataget X_train, y_train, X_test, y_test = dataget.image.mnist(global_cache=True).get() print("X_train:", X_train.shape, X_train.dtype) print("y_train:", y_train.shape, y_train.dtype) print("X_test:", X_test.shape, X_test.dtype) print("y_test:", y_test.shape, y_test.dtype) ###Output X_train: (60000, 28, 28) uint8 y_train: (60000,) uint8 X_test: (10000, 28, 28) uint8 y_test: (10000,) uint8 ###Markdown In this case `dataget` loads the data from Yann LeCun's website. Creating the Model Now that we have the data we can define our model. In Elegy you can do this by inheriting from `elegy.Module` and defining a `call` method. This method should take in some inputs, perform a series of transformations using Jax and Haiku expressions, and return the outputs of the network.
In this example we will create a simple 2 layer MLP using Haiku modules: ###Code import jax.numpy as jnp import jax import haiku as hk import elegy class MLP(elegy.Module): """Standard LeNet-300-100 MLP network.""" def __init__(self, n1: int = 300, n2: int = 100, **kwargs): super().__init__(**kwargs) self.n1 = n1 self.n2 = n2 def call(self, image: jnp.ndarray) -> jnp.ndarray: image = image.astype(jnp.float32) / 255.0 mlp = hk.Sequential( [ hk.Flatten(), hk.Linear(self.n1), jax.nn.relu, hk.Linear(self.n2), jax.nn.relu, hk.Linear(10), ] ) return mlp(image) ###Output _____no_output_____ ###Markdown Here we are using `Sequential` to stack two layers with `relu` activations and a final `Linear` layer with `10` units that represents the logits of the network. This code should feel familiar to most Keras / PyTorch users; the main difference here is that instead of assigning layers / modules as fields inside `__init__` and later using them in `call` / `forward`, here we can just use them in place since Haiku tracks the state for us "behind the scenes". Writing model code in Elegy / Haiku often feels easier since there tends to be a lot less boilerplate thanks to Haiku hooks. For a primer on Haiku please refer to this [Quick Start](https://github.com/deepmind/dm-haikuquickstart).**Note** `elegy.Module` is just a thin wrapper over `haiku.Module` that adds certain Elegy-related functionalities; you can inherit from `haiku.Module` instead if you wish, just remember to also rename `call` to `__call__`. Now that we have this module we can create an Elegy `Model`.
###Code from jax.experimental import optix model = elegy.Model( module=lambda: MLP(n1=300, n2=100), loss=[ elegy.losses.SparseCategoricalCrossentropy(from_logits=True), elegy.regularizers.GlobalL2(l=1e-4), ], metrics=lambda: elegy.metrics.SparseCategoricalAccuracy(), optimizer=optix.rmsprop(1e-3), ) ###Output _____no_output_____ ###Markdown Much like `keras.Model`, an Elegy Model is tasked with performing training, evaluation, and inference. The constructor of this class accepts most of the arguments accepted by `keras.Model.compile`, as you might have seen, but there are some notable differences: 1. It requires you to pass a `module` as the first argument. 2. Loss can be a list even if we don't have multiple corresponding outputs/labels; this is because Elegy exposes a more flexible system for defining losses and metrics based on Dependency Injection. You might have noticed some weird `lambda` expressions around `module` and `metrics`; these arise because Haiku prohibits the creation of `haiku.Module`s outside of a `haiku.transform`. To get around this restriction we just defer instantiation of these objects by wrapping them inside a `lambda` and calling them later. For convenience both the `elegy.Module` and `elegy.Metric` classes define a `defer` classmethod which you can use to make things more readable: ###Code model = elegy.Model( module=MLP.defer(n1=300, n2=100), loss=[ elegy.losses.SparseCategoricalCrossentropy(from_logits=True), elegy.regularizers.GlobalL2(l=1e-4), ], metrics=elegy.metrics.SparseCategoricalAccuracy.defer(), optimizer=optix.rmsprop(1e-3), ) ###Output _____no_output_____ ###Markdown Training the Model Having our `model` instance ready we now need to pass it some data to start training. Like in Keras this is done via the `fit` method, which has more or less the same signature. We try to be as compatible with Keras as possible here but also remove a lot of the TensorFlow-specific stuff.
The following code will train our model for `100` epochs while limiting each epoch to `200` steps and using a batch size of `64`: ###Code history = model.fit( x=X_train, y=y_train, epochs=100, steps_per_epoch=200, batch_size=64, validation_data=(X_test, y_test), shuffle=True, callbacks=[elegy.callbacks.ModelCheckpoint("model", save_best_only=True)], ) ###Output _____no_output_____ ###Markdown ```...Epoch 99/100200/200 [==============================] - 1s 5ms/step - l2_regularization_loss: 0.0094 - loss: 0.0105 - sparse_categorical_accuracy: 0.9958 - sparse_categorical_crossentropy_loss: 0.0011 - val_l2_regularization_loss: 0.0094 - val_loss: 0.0094 - val_sparse_categorical_accuracy: 0.9813 - val_sparse_categorical_crossentropy_loss: 7.4506e-09Epoch 100/100200/200 [==============================] - 1s 5ms/step - l2_regularization_loss: 0.0094 - loss: 0.0271 - sparse_categorical_accuracy: 0.9966 - sparse_categorical_crossentropy_loss: 0.0177 - val_l2_regularization_loss: 0.0094 - val_loss: 0.0094 - val_sparse_categorical_accuracy: 0.9806 - val_sparse_categorical_crossentropy_loss: 4.4703e-08 ```We've ported Keras beloved progress bar and also implemented its `Callback` and `History` APIs. `fit` returns a `history` object which we will use next to visualize how the metrics and losses evolved during training. 
###Code import matplotlib.pyplot as plt def plot_history(history): n_plots = len(history.history.keys()) // 2 plt.figure(figsize=(14, 24)) for i, key in enumerate(list(history.history.keys())[:n_plots]): metric = history.history[key] val_metric = history.history[f"val_{key}"] plt.subplot(n_plots, 1, i + 1) plt.plot(metric, label=f"Training {key}") plt.plot(val_metric, label=f"Validation {key}") plt.legend(loc="lower right") plt.ylabel(key) plt.title(f"Training and Validation {key}") plt.show() plot_history(history) ###Output _____no_output_____ ###Markdown Doing InferenceHaving our trained model we can now get some samples from the test set and generate some predictions. First we will just pick some random samples using `numpy`: ###Code import numpy as np idxs = np.random.randint(0, 10000, size=(9,)) x_sample = X_test[idxs] ###Output _____no_output_____ ###Markdown Here we selected `9` random images. Now we can use the `predict` method to get their labels: ###Code y_pred = model.predict(x=x_sample) ###Output _____no_output_____ ###Markdown Easy right? Finally lets plot the results to see if they are accurate. ###Code plt.figure(figsize=(12, 12)) for i in range(3): for j in range(3): k = 3 * i + j plt.subplot(3, 3, k + 1) plt.title(f"{np.argmax(y_pred[k])}") plt.imshow(x_sample[k], cmap="gray") ###Output _____no_output_____ ###Markdown Perfect! Loading Saved ModelSince we used `elegy.callbacks.ModelCheckpoint` we can always restore our model from disk in the future. ###Code try: # current model reference print("current model reference:", model) model = elegy.model.load("model") except: print( "Could not load model, this is pobably due to a bug in `cloudpickle " "on certain python versions. For better results try Python >= 3.8. " "An alternative way to load the model is to manually build the model from " "the source code and use `model.load('model')` which will only load the weights + state." 
) model.load("model") # new model reference print("new model reference: ", model) # check that it works! model.predict(x=x_sample).shape ###Output current model reference: <elegy.model.Model object at 0x7fd8beee4ac8> new model reference: <elegy.model.Model object at 0x7fd8a353e358> ###Markdown In this tutorial we will explore the basic features of **Elegy**. If you are a Keras user you should feel at home, if you are currently using Jax things will appear much more streamlined. To get started you will first need to install the following dependencies: ###Code ! pip install --upgrade pip ! pip install elegy dataget matplotlib ! pip install jax==0.1.75 jaxlib==0.1.52 # for CPU only # For GPU install proper version of your CUDA, following will work in COLAB: # ! pip install --upgrade jax==0.1.75 jaxlib==0.1.52+cuda101 -f https://storage.googleapis.com/jax-releases/jax_releases.html ###Output _____no_output_____ ###Markdown Note that Elegy doesn't depend on `jax` since there are both `cpu` and `gpu` versions you can choose from, the previous block will install `jax-cpu`, if you want jax to run on gpu you will need to [install it](https://github.com/google/jaxinstallation) separately. If you are running this example on Colab you need to uncomment the part which installs the GPU version suitable for Colab. Loading the DataIn this tutorial we will train a Neural Network on the MNIST dataset, for this we will first need to download and load the data into memory. Here we will use `dataget` for simplicity but you can use you favorite datasets library. 
###Code import dataget X_train, y_train, X_test, y_test = dataget.image.mnist(global_cache=True).get() print("X_train:", X_train.shape, X_train.dtype) print("y_train:", y_train.shape, y_train.dtype) print("X_test:", X_test.shape, X_test.dtype) print("y_test:", y_test.shape, y_test.dtype) ###Output X_train: (60000, 28, 28) uint8 y_train: (60000,) uint8 X_test: (10000, 28, 28) uint8 y_test: (10000,) uint8 ###Markdown In this case `dataget` loads the data from Yann LeCun's website. Defining the ArchitectureNow that we have the data we can define our model. In Elegy you can do this by inheriting from `elegy.Module` and defining a `call` method. This method should take in some inputs, perform a series of transformations using Jax, and return the outputs of the network. In this example we will create a simple 2 layer MLP using Elegy modules: ###Code import jax.numpy as jnp import jax import elegy class MLP(elegy.Module): """Standard LeNet-300-100 MLP network.""" def __init__(self, n1: int = 300, n2: int = 100, **kwargs): super().__init__(**kwargs) self.n1 = n1 self.n2 = n2 def call(self, image: jnp.ndarray) -> jnp.ndarray: image = image.astype(jnp.float32) / 255.0 mlp = elegy.nn.sequential( elegy.nn.Flatten(), elegy.nn.Linear(self.n1), jax.nn.relu, elegy.nn.Linear(self.n2), jax.nn.relu, elegy.nn.Linear(10), ) return mlp(image) ###Output _____no_output_____ ###Markdown Here we are using `sequential` to stack two layers with `relu` activations and a final `Linear` layer with `10` units that represents the logits of the network. This code should feel familiar to most Keras / PyTorch users. The main difference here is that thanks to elegy's [hooks system](https://poets-ai.github.io/elegy/guides/module-system/) you can (unconditionally) declare modules, parameters, and states right in your `call` (forward) function without having to explicitly assign them to properties. This tends to produce much more readable code and reduce boilerplate.
Creating the ModelNow that we have this module we can create an Elegy `Model`. ###Code import optax model = elegy.Model( module=MLP(n1=300, n2=100), loss=[ elegy.losses.SparseCategoricalCrossentropy(from_logits=True), elegy.regularizers.GlobalL2(l=1e-4), ], metrics=elegy.metrics.SparseCategoricalAccuracy(), optimizer=optax.adam(1e-3), ) ###Output _____no_output_____ ###Markdown Much like `keras.Model`, an Elegy Model is tasked with performing training, evaluation, and inference. The constructor of this class accepts most of the arguments accepted by `keras.Model.compile` as you might have seen but there are some notable differences:1. It requires you to pass a `module` as first argument.2. `loss` can be a list even if we don't have multiple corresponding outputs/labels, this is because Elegy exposes a more [flexible system](https://poets-ai.github.io/elegy/guides/modules-losses-metrics/) for defining losses and metrics based on Dependency Injection. As in Keras, you can get a rich description of the model by calling `Model.summary` with a sample input: ###Code model.summary(X_train[:64]) ###Output ╒═════════════════════╤═══════════════════════╤═════════════════════╤═════════════════╕ │ Layer │ Outputs Shape │ Trainable │ Non-trainable │ │ │ │ Parameters │ Parameters │ ╞═════════════════════╪═══════════════════════╪═════════════════════╪═════════════════╡ │ Inputs │ (64, 28, 28) uint8 │ 0 │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ flatten (Flatten) │ (64, 784) float32 │ 0 │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ linear (Linear) │ (64, 300) float32 │ 235,500 942.0 KB │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ relu │ (64, 300) float32 │ 0 │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ linear_1 (Linear) │ (64, 100) float32 │ 30,100 120.4 KB │ 0 │ 
├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ relu_1 │ (64, 100) float32 │ 0 │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ linear_2 (Linear) │ (64, 10) float32 │ 1,010 4.0 KB │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ Outputs (MLP) │ (64, 10) float32 │ 0 │ 0 │ ╘═════════════════════╧═══════════════════════╧═════════════════════╧═════════════════╛ Total Parameters: 266,610 1.1 MB Trainable Parameters: 266,610 1.1 MB Non-trainable Parameters: 0 ###Markdown Training the ModelHaving our `model` instance we are now ready to pass it some data to start training. Like in Keras this is done via the `fit` method which contains more or less the same signature. We try to be as compatible with Keras as possible but also remove a lot of the Tensorflow specific stuff. The following code will train our model for `100` epochs while limiting each epoch to `200` steps and using a batch size of `64`: ###Code history = model.fit( x=X_train, y=y_train, epochs=100, steps_per_epoch=200, batch_size=64, validation_data=(X_test, y_test), shuffle=True, callbacks=[elegy.callbacks.ModelCheckpoint("model", save_best_only=True)], ) ###Output _____no_output_____ ###Markdown ```...Epoch 99/100200/200 [==============================] - 1s 4ms/step - l2_regularization_loss: 0.0452 - loss: 0.0662 - sparse_categorical_accuracy: 0.9928 - sparse_categorical_crossentropy_loss: 0.0210 - val_l2_regularization_loss: 0.0451 - val_loss: 0.1259 - val_sparse_categorical_accuracy: 0.9766 - val_sparse_categorical_crossentropy_loss: 0.0808Epoch 100/100200/200 [==============================] - 1s 4ms/step - l2_regularization_loss: 0.0450 - loss: 0.0610 - sparse_categorical_accuracy: 0.9953 - sparse_categorical_crossentropy_loss: 0.0161 - val_l2_regularization_loss: 0.0447 - val_loss: 0.1093 - val_sparse_categorical_accuracy: 0.9795 - 
val_sparse_categorical_crossentropy_loss: 0.0646 ```As you see we've ported Keras progress bar and also implemented its `Callback` and `History` APIs. `fit` returns a `history` object which we will use next to visualize how the metrics and losses evolved during training. ###Code import matplotlib.pyplot as plt def plot_history(history): n_plots = len(history.history.keys()) // 2 plt.figure(figsize=(14, 24)) for i, key in enumerate(list(history.history.keys())[:n_plots]): metric = history.history[key] val_metric = history.history[f"val_{key}"] plt.subplot(n_plots, 1, i + 1) plt.plot(metric, label=f"Training {key}") plt.plot(val_metric, label=f"Validation {key}") plt.legend(loc="lower right") plt.ylabel(key) plt.title(f"Training and Validation {key}") plt.show() plot_history(history) ###Output _____no_output_____ ###Markdown Generating PredictionsHaving our trained model we can now get some samples from the test set and generate some predictions. First we will just pick some random samples using `numpy`: ###Code import numpy as np idxs = np.random.randint(0, 10000, size=(9,)) x_sample = X_test[idxs] ###Output _____no_output_____ ###Markdown Here we selected `9` random images. Now we can use the `predict` method to get their labels: ###Code y_pred = model.predict(x=x_sample) ###Output _____no_output_____ ###Markdown Easy right? Finally lets plot the results to see if they are accurate. ###Code plt.figure(figsize=(12, 12)) for i in range(3): for j in range(3): k = 3 * i + j plt.subplot(3, 3, k + 1) plt.title(f"{np.argmax(y_pred[k])}") plt.imshow(x_sample[k], cmap="gray") ###Output _____no_output_____ ###Markdown Perfect! SerializationTo serialize the `Model` you can just use the `model.save(...)` method, this will create a folder with some files that contain the model's code plus all parameters and states. 
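Under the hood, the `save_best_only=True` option we passed to `ModelCheckpoint` in `fit` earlier boils down to a simple rule: only persist the model when the monitored value improves. A minimal pure-Python sketch of that idea (this is not Elegy's actual callback code, and `save_fn` is a made-up stand-in for the real saving step):

```python
class BestOnlyCheckpoint:
    """Call `save_fn` only when the monitored value improves (lower is better)."""

    def __init__(self, save_fn):
        self.save_fn = save_fn
        self.best = float("inf")

    def on_epoch_end(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.save_fn()
            return True   # checkpoint written
        return False      # no improvement, nothing written

# Simulate four epochs of validation loss:
saves = []
ckpt = BestOnlyCheckpoint(save_fn=lambda: saves.append("saved"))
for loss in [0.9, 0.5, 0.7, 0.4]:
    ckpt.on_epoch_end(loss)
print(len(saves))  # 3 -- only the improving epochs triggered a save
```

The upshot is that the `"model"` folder on disk always holds the weights from the best validation epoch, not the last one.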
However, here we don't need to do that since we previously added the `elegy.callbacks.ModelCheckpoint` callback on `fit` which periodically does this for us during training. We configured `ModelCheckpoint` to save our model to a folder called `"model"` so we can just load it from there using `elegy.model.load`. Lets get a new model reference containing the same weights and call its `evaluate` method to verify everything loaded correctly: ###Code # current model reference print("current model id:", id(model)) # load model from disk model = elegy.model.load("model") # new model reference print("new model id: ", id(model)) # check that it works! model.evaluate(x=X_test, y=y_test) ###Output current model id: 140137340602160 new model id: 140136071352432 ###Markdown Getting started Installation [`frds`](/) requires Python3.8 or higher. Install via `PyPI`Using `pip` is the simplest way. To install using `pip`:```bashpip install frds --upgrade``` Install from source[`frds`](https://github.com/mgao6767/frds/) is available on GitHub. To download the source code and install:```bashcd ~git clone https://github.com/mgao6767/frds.gitcd frdspip install .``` SetupBy default, a folder named `frds` will be created under the user's home directory to store downloaded data. `frds` primarily uses WRDS to obtain data, so it requires WRDS login credentials. The setup is done by `frds.io.wrds.setup`: ###Code from frds.io.wrds import setup ###Output _____no_output_____ ###Markdown If `save_credentials=True`, the username and password will be saved locally in `credentials.json` in the `frds` folder. Then in later uses, no more setup is required (no just current session). ###Code setup(username='username', password='password', save_credentials=False) ###Output _____no_output_____ ###Markdown Usage Compute metrics on the goA typical example of how to use `frds` in a few lines. 
###Code import pandas as pd from frds.data.wrds.comp import Funda from frds.io.wrds import load FUNDA = load(Funda, use_cache=True, obs=100) (FUNDA.PPENT / FUNDA.AT).to_frame("Tangibility") ###Output _____no_output_____ ###Markdown Built-in measuresThe real time-saver is the built-in measures in `frds.measures`. ###Code from frds.measures.corporate import roa (roa_v1 := roa(FUNDA).to_frame("ROA: IB/AT")) ###Output _____no_output_____ ###Markdown Some measures may have variants. For example, ROA can be computed as income before extraordinary items scaled by either contemporaneous or lagged total assets. ###Code (roa_v2 := roa(FUNDA, use_lagged_total_asset=True).to_frame("ROA: IB/lagged AT")) ###Output _____no_output_____ ###Markdown We could then perform some easy analyses on the computed metrics. ###Code # Pearson correlation roa_v1.join(roa_v2).corr() ###Output _____no_output_____ ###Markdown Jupyter-flex allows you to create dashboards based on Jupyter Notebooks, built around two simple concepts:1. Control the layout of the dashboard using markdown headers (`#`, `##` and `###`)2. Define the dashboard components using Jupyter Notebook cell tags (`body` and others) Your first dashboardLet's take a very simple Jupyter Notebook with one plot and make a dashboard.The notebook is: ###Code import numpy as np import pandas as pd import altair as alt from vega_datasets import data alt.renderers.set_embed_options(actions=False) np.random.seed(42) source = data.cars() plot = alt.Chart(source).mark_circle(size=60).encode( x='Horsepower', y='Miles_per_Gallon', color='Origin', tooltip=['Name', 'Origin', 'Horsepower', 'Miles_per_Gallon'] ) plot ###Output _____no_output_____ ###Markdown All you need to do to convert this to a dashboard is to add a `body` tag to the cell that has the plot as the output.
How to view and add tags to cells in Jupyter Lab You can find a tag editor by clicking the gears icon at the top of the right sidebar How to view and add tags to cells in Jupyter Classic Notebook In the top navigation go to View > Cell Toolbar > Tags Then type "body" in the new input of the target cell and click on "Add tag" or press enter Responsive plots Depending on the plotting library you use you might need to add a bit of code to make the plot occupy all the space of the card. See the plotting page for more info. Converting the Notebook to an HTML fileThere are a couple of options to convert the notebook to an HTML dashboard.1. Execute the notebook as you normally do in the Jupyter Notebook UI and then select: `File > Download as > Flex Dashboard (.html)`:![Jupyter-flex Download As](/assets/img/getting-started/download-as.png)2. You can open a terminal and run `nbconvert`:```shell$ jupyter nbconvert --to flex notebook.ipynb```Optionally add the `--execute` flag to execute the notebook before converting it to a dashboard.```shell$ jupyter nbconvert --to flex notebook.ipynb --execute```Open the resulting `.html` file in a browser and the result will be:[![](/assets/img/screenshots/jupyter_flex.tests.test_examples/docs_1-one-plot-reference.png)](/examples/1-one-plot.html)Click on the image to open dashboardYou might notice that the default title of the dashboard is the name of the notebook file; you can customize this using [parameters](parameters-orientation-and-title). Cards: Multiple outputsA Card is an object that holds one or more Cells. Cells can be markdown or code cells with outputs such as plots, text and widgets.You define a new Card by adding a level-3 markdown header (`###`).Any output from a tagged Cell will be added to the current Card until a new Card, Section or Page is defined.Going back to the notebook example we can add a new plot to the dashboard by adding two new cells:1. One markdown cell with a level-3 markdown header (`###`)2.
One code cell with the `body` tag ###Code ### Second plot source = data.stocks() plot = alt.Chart(source).mark_area( color="lightblue", interpolate='step-after', line=True ).encode( x='date', y='price' ).transform_filter(alt.datum.symbol == 'GOOG') plot ###Output _____no_output_____ ###Markdown [![](/assets/img/screenshots/jupyter_flex.tests.test_examples/docs_2-two-plots-reference.png)](/examples/2-two-plots.html)Click on the image to open dashboardYou will notice two things:1. The default layout is a single column with cards stacked vertically and sized to fill available browser height.2. The value of the level-3 markdown header is added as the Card title Sections: Multiple columnsTo add another column to the dashboard define a new Section using a level-2 markdown header (`##`)In this case, the value of the header is irrelevant (it won't be shown on the dashboard); it just acts as an indicator to create a new Section. ###Code ## Column source = data.iris() plot = alt.Chart(source).mark_circle().encode( alt.X('sepalLength', scale=alt.Scale(zero=False)), alt.Y('sepalWidth', scale=alt.Scale(zero=False, padding=1)), color='species', size='petalWidth' ) plot ###Output _____no_output_____ ###Markdown In this case the result would be:[![](/assets/img/screenshots/jupyter_flex.tests.test_examples/docs_3-two-columns-reference.png)](/examples/3-two-columns.html)Click on the image to open dashboardYou will notice another default orientation: to have multiple Sections as columns. Parameters: Orientation and titleYou can control the parameters of the dashboard such as title, orientation and more by adding a `parameters` tag to a code cell. Let's add a title of `My first Flex dashboard` and change the orientation of the sections to `rows` ###Code flex_title = "My first Flex dashboard" flex_orientation = "rows" ###Output _____no_output_____ ###Markdown In this tutorial we will explore the basic features of **Elegy**.
If you are a Keras user you should feel at home; if you are currently using Jax or Haiku things will appear much more streamlined. To get started you will first need to install the following dependencies: ###Code ! pip install elegy dataget matplotlib ###Output _____no_output_____ ###Markdown Note that Elegy doesn't depend on `jax` since there are both `cpu` and `gpu` versions you can choose from, so you will need to install it separately. Loading the DataIn this tutorial we will train a Neural Network on the MNIST dataset; for this we will first need to download and load the data into memory. Here we will use `dataget` for simplicity but you can use your favorite datasets library. ###Code import dataget X_train, y_train, X_test, y_test = dataget.image.mnist(global_cache=True).get() print("X_train:", X_train.shape, X_train.dtype) print("y_train:", y_train.shape, y_train.dtype) print("X_test:", X_test.shape, X_test.dtype) print("y_test:", y_test.shape, y_test.dtype) ###Output X_train: (60000, 28, 28) uint8 y_train: (60000,) uint8 X_test: (10000, 28, 28) uint8 y_test: (10000,) uint8 ###Markdown In this case `dataget` loads the data from Yann LeCun's website. Defining the ArchitectureNow that we have the data we can define our model. In Elegy you can do this by inheriting from `elegy.Module` and defining a `__apply__` method. This method should take in some inputs, perform a series of transformations using Jax and Haiku expressions, and return the outputs of the network.
In this example we will create a simple 2 layer MLP using Haiku modules: ###Code import jax.numpy as jnp import jax import haiku as hk import elegy class MLP(elegy.Module): """Standard LeNet-300-100 MLP network.""" def __init__(self, n1: int = 300, n2: int = 100, **kwargs): super().__init__(**kwargs) self.n1 = n1 self.n2 = n2 def __apply__(self, image: jnp.ndarray) -> jnp.ndarray: image = image.astype(jnp.float32) / 255.0 mlp = hk.Sequential( [ elegy.nn.Flatten(), elegy.nn.Linear(self.n1), jax.nn.relu, elegy.nn.Linear(self.n2), jax.nn.relu, elegy.nn.Linear(10), ] ) return mlp(image) ###Output _____no_output_____ ###Markdown Here we are using `Sequential` to stack two layers with `relu` activations and a final `Linear` layer with `10` units that represents the logits of the network. This code should feel familiar to most Keras / PyTorch users; the main difference here is that instead of assigning layers / modules as fields inside `__init__` and later using them in `call` / `forward`, here we can just use them in place since Haiku tracks the state for us "behind the scenes". Writing model code in Elegy / Haiku often feels easier since there tends to be a lot less boilerplate thanks to Haiku hooks. For a primer on Haiku please refer to this [Quick Start](https://github.com/deepmind/dm-haiku#quickstart).An `elegy.Module` is just a thin wrapper over `haiku.Module`; you can inherit from `haiku.Module` instead if you wish but you will lose certain functionalities when using Elegy. The Modules found in `elegy.nn` generally just wrap the ones found in Haiku; they add Elegy-related functionalities such as keeping track of each layer's outputs for collecting summaries, adding default arguments usually provided by Keras, or taking advantage of custom Elegy hooks such as `elegy.add_loss` to e.g. enable per-layer regularization strategies. Creating the ModelNow that we have this module we can create an Elegy `Model`.
###Code from jax.experimental import optix model = elegy.Model( module=MLP.defer(n1=300, n2=100), loss=[ elegy.losses.SparseCategoricalCrossentropy(from_logits=True), elegy.regularizers.GlobalL2(l=1e-4), ], metrics=elegy.metrics.SparseCategoricalAccuracy.defer(), optimizer=optix.adam(1e-3), ) ###Output _____no_output_____ ###Markdown Much like `keras.Model`, an Elegy Model is tasked with performing training, evaluation, and inference. The constructor of this class accepts most of the arguments accepted by `keras.Model.compile` as you might have seen but there are some notable differences:1. It requires you to pass a `module` as the first argument.2. Loss can be a list even if we don't have multiple corresponding outputs/labels; this is because Elegy exposes a more flexible system for defining losses and metrics based on Dependency Injection.You might have noticed some weird `defer` expressions around `module` and `metrics`; these arise because Haiku prohibits the creation of `haiku.Module`s outside of a `haiku.transform`. To go around this restriction we just defer instantiation of these objects by wrapping them inside a `Defered` object and calling them in the correct context.
The previous is roughly equivalent to: ###Code _model = elegy.Model( module=lambda x: MLP(n1=300, n2=100)(x), loss=[ elegy.losses.SparseCategoricalCrossentropy(from_logits=True), elegy.regularizers.GlobalL2(l=1e-4), ], metrics=lambda y_true, y_pred: elegy.metrics.SparseCategoricalAccuracy()( y_true, y_pred ), optimizer=optix.adam(1e-3), ) ###Output _____no_output_____ ###Markdown You can get a summary of the models architecture by calling `Model.summary` with a sample input: ###Code model.summary(X_train[:64]) ###Output ╒═════════════════════════╤═══════════════════════╤═════════════════════╤═════════════════╕ │ Layer │ Outputs Shape │ Trainable │ Non-trainable │ │ │ │ Parameters │ Parameters │ ╞═════════════════════════╪═══════════════════════╪═════════════════════╪═════════════════╡ │ Inputs │ (64, 28, 28) uint8 │ 0 │ 0 │ ├─────────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ mlp/flatten (Flatten) │ (64, 784) float32 │ 0 │ 0 │ ├─────────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ mlp/linear (Linear) │ (64, 300) float32 │ 235,500 942.0 KB │ 0 │ ├─────────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ mlp/linear_1 (Linear) │ (64, 100) float32 │ 30,100 120.4 KB │ 0 │ ├─────────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ mlp/linear_2 (Linear) │ (64, 10) float32 │ 1,010 4.0 KB │ 0 │ ├─────────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ mlp (MLP) │ (64, 10) float32 │ 266,610 1.1 MB │ 0 │ ╘═════════════════════════╧═══════════════════════╧═════════════════════╧═════════════════╛ Total Parameters: 266,610 1.1 MB Trainable Parameters: 266,610 1.1 MB Non-trainable Parameters: 0 ###Markdown Training the ModelHaving our `model` instance ready we now need to pass it some data to start training. 
Like in Keras this is done via the `fit` method which contains more or less the same signature. We try to be as compatible with Keras as possible here but also remove a lot of the Tensorflow specific stuff. The following code will train our model for `100` epochs while limiting each epoch to `200` steps and using a batch size of `64`: ###Code history = model.fit( x=X_train, y=y_train, epochs=100, steps_per_epoch=200, batch_size=64, validation_data=(X_test, y_test), shuffle=True, callbacks=[elegy.callbacks.ModelCheckpoint("model", save_best_only=True)], ) ###Output _____no_output_____ ###Markdown ```...Epoch 99/100200/200 [==============================] - 1s 5ms/step - l2_regularization_loss: 0.0094 - loss: 0.0105 - sparse_categorical_accuracy: 0.9958 - sparse_categorical_crossentropy_loss: 0.0011 - val_l2_regularization_loss: 0.0094 - val_loss: 0.0094 - val_sparse_categorical_accuracy: 0.9813 - val_sparse_categorical_crossentropy_loss: 7.4506e-09Epoch 100/100200/200 [==============================] - 1s 5ms/step - l2_regularization_loss: 0.0094 - loss: 0.0271 - sparse_categorical_accuracy: 0.9966 - sparse_categorical_crossentropy_loss: 0.0177 - val_l2_regularization_loss: 0.0094 - val_loss: 0.0094 - val_sparse_categorical_accuracy: 0.9806 - val_sparse_categorical_crossentropy_loss: 4.4703e-08 ```We've ported Keras beloved progress bar and also implemented its `Callback` and `History` APIs. `fit` returns a `history` object which we will use next to visualize how the metrics and losses evolved during training. 
###Code import matplotlib.pyplot as plt def plot_history(history): n_plots = len(history.history.keys()) // 2 plt.figure(figsize=(14, 24)) for i, key in enumerate(list(history.history.keys())[:n_plots]): metric = history.history[key] val_metric = history.history[f"val_{key}"] plt.subplot(n_plots, 1, i + 1) plt.plot(metric, label=f"Training {key}") plt.plot(val_metric, label=f"Validation {key}") plt.legend(loc="lower right") plt.ylabel(key) plt.title(f"Training and Validation {key}") plt.show() plot_history(history) ###Output _____no_output_____ ###Markdown Doing InferenceHaving our trained model we can now get some samples from the test set and generate some predictions. First we will just pick some random samples using `numpy`: ###Code import numpy as np idxs = np.random.randint(0, 10000, size=(9,)) x_sample = X_test[idxs] ###Output _____no_output_____ ###Markdown Here we selected `9` random images. Now we can use the `predict` method to get their labels: ###Code y_pred = model.predict(x=x_sample) ###Output _____no_output_____ ###Markdown Easy right? Finally lets plot the results to see if they are accurate. ###Code plt.figure(figsize=(12, 12)) for i in range(3): for j in range(3): k = 3 * i + j plt.subplot(3, 3, k + 1) plt.title(f"{np.argmax(y_pred[k])}") plt.imshow(x_sample[k], cmap="gray") ###Output _____no_output_____ ###Markdown Perfect! Loading Saved ModelSince we used `elegy.callbacks.ModelCheckpoint` we can always restore our model from disk in the future. ###Code try: # current model reference print("current model reference:", model) model = elegy.model.load("model") except: print( "Could not load model, this is pobably due to a bug in `cloudpickle " "on certain python versions. For better results try Python >= 3.8. " "An alternative way to load the model is to manually build the model from " "the source code and use `model.load('model')` which will only load the weights + state." 
) model.load("model") # new model reference print("new model reference: ", model) # check that it works! model.predict(x=x_sample).shape ###Output current model reference: <elegy.model.Model object at 0x7fd8beee4ac8> new model reference: <elegy.model.Model object at 0x7fd8a353e358> ###Markdown Getting started First things first, make sure you have [installed creme](installation.md).In `creme`, features are represented with dictionaries, where the keys correspond to the features names. For instance: ###Code import datetime as dt x = { 'shop': 'Ikea', 'city': 'Stockholm', 'date': dt.datetime(2020, 6, 1), 'sales': 42 } ###Output _____no_output_____ ###Markdown It is up to you, the user, to decide how to stream your data. `creme` offers a `stream` module which has various utilities for handling streaming data, such as `stream.iter_csv`. For the sake of example, `creme` also provides a `datasets` module which contains various streaming datasets. For example, the `datasets.Phishing` dataset contains records of [phishing](https://www.wikiwand.com/en/Phishing) attempts on web pages. ###Code from creme import datasets dataset = datasets.Phishing() print(dataset) ###Output _____no_output_____ ###Markdown The dataset is a streaming dataset, and therefore doesn't sit in memory. Instead, we can loop over each sample with a `for` loop: ###Code for x, y in dataset: pass print(x) print(y) ###Output _____no_output_____ ###Markdown Typically, models learn via a `fit_one(x, y)` method, which takes as input some features and a target value. Being able to learn with a single instance gives a lot of flexibility. For instance, a model can be updated whenever a new sample arrives from a stream. To exemplify this, let's train a logistic regression on the above dataset. 
###Code from creme import linear_model model = linear_model.LogisticRegression() for x, y in dataset: model.fit_one(x, y) ###Output _____no_output_____ ###Markdown Predictions can be obtained by calling a model's `predict_one` method. In the case of a classifier, we can also use `predict_proba_one` to produce probability estimates. ###Code model = linear_model.LogisticRegression() for x, y in dataset: y_pred = model.predict_proba_one(x) model.fit_one(x, y) print(y_pred) ###Output {False: 0.7731541581376543, True: 0.22684584186234572} ###Markdown The `metrics` module gives access to many metrics that are commonly used in machine learning. Like the rest of `creme`, these metrics can be updated with one element at a time: ###Code from creme import metrics model = linear_model.LogisticRegression() metric = metrics.ROCAUC() for x, y in dataset: y_pred = model.predict_proba_one(x) model.fit_one(x, y) metric.update(y, y_pred) metric ###Output _____no_output_____ ###Markdown A common way to improve the performance of a logistic regression is to scale the data. This can be done by using a `preprocessing.StandardScaler`. 
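Conceptually, such a scaler has to work one sample at a time, maintaining running statistics instead of seeing the full dataset up front. A minimal pure-Python sketch of the idea (a hypothetical `RunningScaler`, not creme's actual implementation) using Welford's online algorithm:

```python
import math

class RunningScaler:
    """Standardize dict-valued samples using running mean/variance (Welford)."""

    def __init__(self):
        self.n = {}      # per-feature sample count
        self.mean = {}   # per-feature running mean
        self.m2 = {}     # per-feature sum of squared deviations

    def fit_one(self, x):
        for k, v in x.items():
            n = self.n.get(k, 0) + 1
            mean = self.mean.get(k, 0.0)
            delta = v - mean
            mean += delta / n
            self.n[k] = n
            self.mean[k] = mean
            self.m2[k] = self.m2.get(k, 0.0) + delta * (v - mean)
        return self

    def transform_one(self, x):
        out = {}
        for k, v in x.items():
            var = self.m2.get(k, 0.0) / max(self.n.get(k, 1), 1)
            out[k] = (v - self.mean.get(k, 0.0)) / math.sqrt(var) if var > 0 else 0.0
        return out

scaler = RunningScaler()
for v in (1.0, 2.0, 3.0):
    scaler.fit_one({"x": v})
print(scaler.transform_one({"x": 3.0}))
```

Because the statistics are updated incrementally, the scaler never needs the whole dataset in memory, which is exactly what makes it fit the streaming setting.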
In particular, we can define a pipeline to organise our model into a sequence of steps: ###Code from creme import compose from creme import preprocessing model = compose.Pipeline( preprocessing.StandardScaler(), linear_model.LogisticRegression() ) model.draw() metric = metrics.ROCAUC() for x, y in datasets.Phishing(): y_pred = model.predict_proba_one(x) model.fit_one(x, y) metric.update(y, y_pred) metric ###Output _____no_output_____ ###Markdown Getting Started and Usage Examples Basic Operations Create a new document within Python from a given string, set a document feature and add an annotation that spans the whole document to the default annotation set: ###Code from gatenlp import Document # Create the document doc1 = Document("This is the text of the document.") # set a document feature doc1.features["purpose"] = "simple illustration of gatenlp basics" # get the default annotation set defset = doc1.annset() # add an annotation that spans the whole document, no features defset.add(0, len(doc1), "Document", {}) ###Output _____no_output_____ ###Markdown Now save the document in bdocjson format and load back from the saved document: ###Code # save the document in bdocjson format, this can be read into Java GATE # using the format-bdoc plugin.
doc1.save("testdoc.bdocjs") # Read back the document doc2 = Document.load("testdoc.bdocjs") # print the json representation of the loaded document print(doc2.save_mem(fmt="json")) ###Output {"annotation_sets": {"": {"name": "", "annotations": [{"type": "Document", "start": 0, "end": 33, "id": 0, "features": {}}], "next_annid": 1}}, "text": "This is the text of the document.", "features": {"purpose": "simple illustration of gatenlp basics"}, "offset_type": "p", "name": ""} ###Markdown Tokenize and create annotations for the tokens using an NLTK tokenizer: ###Code from gatenlp.processing.tokenizer import NLTKTokenizer from nltk.tokenize import TreebankWordTokenizer nltk_tok = TreebankWordTokenizer() ann_tok = NLTKTokenizer(nltk_tokenizer=nltk_tok, out_set="NLTK") doc1 = ann_tok(doc1) ###Output _____no_output_____ ###Markdown Get all the annotations for the NLTK tokens which are in the annotation set "NLTK": ###Code # get the annotation set with the name "NLTK" and print the number of annotations set_nltk = doc1.annset("NLTK") print(len(set_nltk)) # get only annotations of annotation type "Token" from the set and print the number of annotations # since there are no other annotations in the original set, the number should be the same set_tokens = set_nltk.with_type("Token") print(len(set_tokens)) # print all the annotations in order for ann in set_tokens: print(ann) ###Output 8 8 Annotation(0,4,Token,features=Features({}),id=0) Annotation(5,7,Token,features=Features({}),id=1) Annotation(8,11,Token,features=Features({}),id=2) Annotation(12,16,Token,features=Features({}),id=3) Annotation(17,19,Token,features=Features({}),id=4) Annotation(20,23,Token,features=Features({}),id=5) Annotation(24,32,Token,features=Features({}),id=6) Annotation(32,33,Token,features=Features({}),id=7) ###Markdown The annotation set `set_nltk` is an original annotation set and reflects directly what is stored with the document: it is an "attached" annotation set.
Adding or removing annotations in that set will change what is stored with the document. On the other hand, the annotation set `set_tokens` is the result of a filtering operation and is "detached". By default it is immutable, it cannot be modified. It can be made mutable, but when annotations are then added or removed from the set, this will not change what is stored with the document. ###Code # check if the set_nltk is detached print("set_nltk is detached: ", set_nltk.isdetached()) # no it is attached! print("set_nltk is immutable: ", set_nltk.immutable ) # no # add an annotation to the set set_nltk.add(3,5,"New1") # check if the set_tokens is detached print("set_tokens is detached: ", set_tokens.isdetached()) # yes! # check if it is immutable as well print("set_tokens is immutable: ", set_tokens.immutable ) # yes try: set_tokens.add(5,7,"New2") except Exception as ex: print("Error:", ex) # ok, let's make the set mutable and add the annotation set_tokens.immutable = False set_tokens.add(5,7,"New2") print("set_nltk: size=", len(set_nltk), ", annotation:", set_nltk) print("set_tokens: size=", len(set_tokens), ", annotations: ", set_tokens) ###Output set_nltk is detached: False set_nltk is immutable: False set_tokens is detached: True set_tokens is immutable: True Error: Cannot add an annotation to an immutable annotation set set_nltk: size= 9 , annotation: AnnotationSet([Annotation(0,4,Token,features=Features({}),id=0), Annotation(3,5,New1,features=Features({}),id=8), Annotation(5,7,Token,features=Features({}),id=1), Annotation(8,11,Token,features=Features({}),id=2), Annotation(12,16,Token,features=Features({}),id=3), Annotation(17,19,Token,features=Features({}),id=4), Annotation(20,23,Token,features=Features({}),id=5), Annotation(24,32,Token,features=Features({}),id=6), Annotation(32,33,Token,features=Features({}),id=7)]) set_tokens: size= 9 , annotations: AnnotationSet([Annotation(0,4,Token,features=Features({}),id=0), 
Annotation(5,7,Token,features=Features({}),id=1), Annotation(5,7,New2,features=Features({}),id=8), Annotation(8,11,Token,features=Features({}),id=2), Annotation(12,16,Token,features=Features({}),id=3), Annotation(17,19,Token,features=Features({}),id=4), Annotation(20,23,Token,features=Features({}),id=5), Annotation(24,32,Token,features=Features({}),id=6), Annotation(32,33,Token,features=Features({}),id=7)]) ###Markdown Adding the annotation with type "New1" to `set_nltk` actually stored the annotation with the document, but did not affect the filtered set `set_tokens`, and adding the annotation with type "New2" to the filtered set did not affect the set stored with the document. The document and document features, and the annotation sets and annotations as well as their features, can be shown in a Jupyter notebook by simply showing a document value: ###Code doc1 ###Output _____no_output_____ ###Markdown BBData Python Wrapper Hi! Here's a Python Notebook to get you started with the functions at your disposal to use BBData. Prerequisites Ask a BBData Administrator for an account and create a credential file in `~/.bbdata/credentials.json` with the following content: ###Code { "username": "<my.username>", "password": "<my.password>" } ###Output _____no_output_____ ###Markdown A Few Words About the Structure The wrapper is structured in two parts, the `output` and `input` module, so that it fits with the HTTP APIs initially created. So, if you want to use the `output` endpoint, import the module as follows: ###Code from bbdata.endpoint import output ###Output Welcome frederic.montet ###Markdown It should greet you with your user name to signal that the credential file has been read successfully. Access the Data Now, you can start using the API.
Just login with ###Code output.login() ###Output _____no_output_____ ###Markdown And now, all the methods from BBData are at your disposal with the following nomenclature: ```<bbdata-api>.<base-route>.<http-method>_<sub-route>``` where - `bbdata-api` can be `output` or `input` - `base-route` is one of the routes listed in https://bbdata.daplab.ch/api/ - `http-method` and `sub-route` are used together to create a function name in the form of e.g. `get_comments`. Of course, if you are in doubt about the available methods for a given route, check your auto-completion and read the method's class. When you are finished, log out: ###Code output.logout() ###Output _____no_output_____ ###Markdown A few examples ###Code output.login(); ###Output _____no_output_____ ###Markdown Me ###Code # Get your profile output.me.get() # Get your object groups output.me.get_groups() ###Output _____no_output_____ ###Markdown Info ###Code # Get server information output.info.get() ###Output _____no_output_____ ###Markdown Units ###Code # Get the first 5 units output.units.get()[:5] ###Output _____no_output_____ ###Markdown Object Groups ###Code # Get all object groups output.object_groups.get_all() # Get the object group with id 3 output.object_groups.get(3) ###Output _____no_output_____ ###Markdown Objects ###Code output.objects.get(2648) output.objects.get_tokens(2648) ###Output _____no_output_____ ###Markdown Values ###Code # Also try output.values.hours(...) and output.values.quarters(...) output.values.get( 2649, "2018-06-02T19:00", "2018-06-02T22:00", ) output.logout(); # Check if the logout was successful output.me.get() ###Output _____no_output_____ ###Markdown In this tutorial we will explore the basic features of **Elegy**. If you are a Keras user you should feel at home; if you are currently using Jax, things will appear much more streamlined. To get started you will first need to install the following dependencies: ###Code ! pip install elegy dataget matplotlib !
pip install jax jaxlib ###Output _____no_output_____ ###Markdown Note that Elegy doesn't depend on `jax` since there are both `cpu` and `gpu` versions you can choose from; the previous block will install `jax-cpu`, so if you want jax to run on gpu you will need to [install it](https://github.com/google/jax#installation) separately. If you are running this example on colab you are good to go since it comes with a GPU/TPU-enabled version of `jax` preinstalled. Loading the Data In this tutorial we will train a Neural Network on the MNIST dataset, and for this we will first need to download and load the data into memory. Here we will use `dataget` for simplicity but you can use your favorite datasets library. ###Code import dataget X_train, y_train, X_test, y_test = dataget.image.mnist(global_cache=True).get() print("X_train:", X_train.shape, X_train.dtype) print("y_train:", y_train.shape, y_train.dtype) print("X_test:", X_test.shape, X_test.dtype) print("y_test:", y_test.shape, y_test.dtype) ###Output X_train: (60000, 28, 28) uint8 y_train: (60000,) uint8 X_test: (10000, 28, 28) uint8 y_test: (10000,) uint8 ###Markdown In this case `dataget` loads the data from Yann LeCun's website. Defining the Architecture Now that we have the data we can define our model. In Elegy you can do this by inheriting from `elegy.Module` and defining a `call` method. This method should take in some inputs, perform a series of transformations using Jax, and return the outputs of the network.
In this example we will create a simple 2 layer MLP using Elegy modules: ###Code import jax.numpy as jnp import jax import elegy class MLP(elegy.Module): """Standard LeNet-300-100 MLP network.""" def __init__(self, n1: int = 300, n2: int = 100, **kwargs): super().__init__(**kwargs) self.n1 = n1 self.n2 = n2 def call(self, image: jnp.ndarray) -> jnp.ndarray: image = image.astype(jnp.float32) / 255.0 mlp = elegy.nn.sequential( elegy.nn.Flatten(), elegy.nn.Linear(self.n1), jax.nn.relu, elegy.nn.Linear(self.n2), jax.nn.relu, elegy.nn.Linear(10), ) return mlp(image) ###Output _____no_output_____ ###Markdown Here we are using `sequential` to stack two layers with `relu` activations and a final `Linear` layer with `10` units that represents the logits of the network. This code should feel familiar to most Keras / PyTorch users. The main difference here is that thanks to elegy's [hooks system](https://poets-ai.github.io/elegy/guides/module-system/) you can (uncoditionally) declare modules, parameters, and states right in your `call` (forward) function without having to explicitly assign them to properties. This tends to produce much more readable code and reduce boilerplate. Creating the ModelNow that we have this module we can create an Elegy `Model`. ###Code import optax model = elegy.Model( module=MLP(n1=300, n2=100), loss=[ elegy.losses.SparseCategoricalCrossentropy(from_logits=True), elegy.regularizers.GlobalL2(l=1e-4), ], metrics=elegy.metrics.SparseCategoricalAccuracy(), optimizer=optax.adam(1e-3), ) ###Output _____no_output_____ ###Markdown Much like `keras.Model`, an Elegy Model is tasked with performing training, evaluation, and inference. The constructor of this class accepts most of the arguments accepted by `keras.Model.compile` as you might have seen but there are some notable differences:1. It requires you to pass a `module` as first argument.2. 
`loss` can be a list even if we don't have multiple corresponding outputs/labels, this is because Elegy exposes a more [flexible system](https://poets-ai.github.io/elegy/guides/modules-losses-metrics/) for defining losses and metrics based on Dependency Injection. As in Keras, you can get a rich description of the model by calling `Model.summary` with a sample input: ###Code model.summary(X_train[:64]) ###Output ╒═════════════════════╤═══════════════════════╤═════════════════════╤═════════════════╕ │ Layer │ Outputs Shape │ Trainable │ Non-trainable │ │ │ │ Parameters │ Parameters │ ╞═════════════════════╪═══════════════════════╪═════════════════════╪═════════════════╡ │ Inputs │ (64, 28, 28) uint8 │ 0 │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ flatten (Flatten) │ (64, 784) float32 │ 0 │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ linear (Linear) │ (64, 300) float32 │ 235,500 942.0 KB │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ relu │ (64, 300) float32 │ 0 │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ linear_1 (Linear) │ (64, 100) float32 │ 30,100 120.4 KB │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ relu_1 │ (64, 100) float32 │ 0 │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ linear_2 (Linear) │ (64, 10) float32 │ 1,010 4.0 KB │ 0 │ ├─────────────────────┼───────────────────────┼─────────────────────┼─────────────────┤ │ Outputs (MLP) │ (64, 10) float32 │ 0 │ 0 │ ╘═════════════════════╧═══════════════════════╧═════════════════════╧═════════════════╛ Total Parameters: 266,610 1.1 MB Trainable Parameters: 266,610 1.1 MB Non-trainable Parameters: 0 ###Markdown Training the ModelHaving our `model` instance we are now ready to pass it some data to start training. 
Like in Keras this is done via the `fit` method which contains more or less the same signature. We try to be as compatible with Keras as possible but also remove a lot of the Tensorflow specific stuff. The following code will train our model for `100` epochs while limiting each epoch to `200` steps and using a batch size of `64`: ###Code history = model.fit( x=X_train, y=y_train, epochs=100, steps_per_epoch=200, batch_size=64, validation_data=(X_test, y_test), shuffle=True, callbacks=[elegy.callbacks.ModelCheckpoint("model", save_best_only=True)], ) ###Output _____no_output_____ ###Markdown ```...Epoch 99/100200/200 [==============================] - 1s 4ms/step - l2_regularization_loss: 0.0452 - loss: 0.0662 - sparse_categorical_accuracy: 0.9928 - sparse_categorical_crossentropy_loss: 0.0210 - val_l2_regularization_loss: 0.0451 - val_loss: 0.1259 - val_sparse_categorical_accuracy: 0.9766 - val_sparse_categorical_crossentropy_loss: 0.0808Epoch 100/100200/200 [==============================] - 1s 4ms/step - l2_regularization_loss: 0.0450 - loss: 0.0610 - sparse_categorical_accuracy: 0.9953 - sparse_categorical_crossentropy_loss: 0.0161 - val_l2_regularization_loss: 0.0447 - val_loss: 0.1093 - val_sparse_categorical_accuracy: 0.9795 - val_sparse_categorical_crossentropy_loss: 0.0646 ```As you see we've ported Keras progress bar and also implemented its `Callback` and `History` APIs. `fit` returns a `history` object which we will use next to visualize how the metrics and losses evolved during training. 
###Code import matplotlib.pyplot as plt def plot_history(history): n_plots = len(history.history.keys()) // 2 plt.figure(figsize=(14, 24)) for i, key in enumerate(list(history.history.keys())[:n_plots]): metric = history.history[key] val_metric = history.history[f"val_{key}"] plt.subplot(n_plots, 1, i + 1) plt.plot(metric, label=f"Training {key}") plt.plot(val_metric, label=f"Validation {key}") plt.legend(loc="lower right") plt.ylabel(key) plt.title(f"Training and Validation {key}") plt.show() plot_history(history) ###Output _____no_output_____ ###Markdown Generating Predictions Having our trained model, we can now get some samples from the test set and generate some predictions. First we will just pick some random samples using `numpy`: ###Code import numpy as np idxs = np.random.randint(0, 10000, size=(9,)) x_sample = X_test[idxs] ###Output _____no_output_____ ###Markdown Here we selected `9` random images. Now we can use the `predict` method to get their labels: ###Code y_pred = model.predict(x=x_sample) ###Output _____no_output_____ ###Markdown Easy, right? Finally, let's plot the results to see if they are accurate. ###Code plt.figure(figsize=(12, 12)) for i in range(3): for j in range(3): k = 3 * i + j plt.subplot(3, 3, k + 1) plt.title(f"{np.argmax(y_pred[k])}") plt.imshow(x_sample[k], cmap="gray") ###Output _____no_output_____ ###Markdown Perfect! Serialization To serialize the `Model` you can just use the `model.save(...)` method; this will create a folder with some files that contain the model's code plus all parameters and states. However, here we don't need to do that since we previously added the `elegy.callbacks.ModelCheckpoint` callback on `fit` which periodically does this for us during training. We configured `ModelCheckpoint` to save our model to a folder called `"model"` so we can just load it from there using `elegy.model.load`.
Let's get a new model reference containing the same weights and call its `evaluate` method to verify everything loaded correctly: ###Code # current model reference print("current model id:", id(model)) # load model from disk model = elegy.model.load("model") # new model reference print("new model id: ", id(model)) # check that it works! model.evaluate(x=X_test, y=y_test) ###Output current model id: 140137340602160 new model id: 140136071352432 ###Markdown Jupyter-flex allows you to create interactive dashboards from Jupyter Notebooks based on two simple concepts: 1. Control the layout of the dashboard using markdown headers 2. Define the dashboard components using Jupyter Notebook cell tags Your first dashboard Let's take a very simple Jupyter Notebook with 3 cells and one plot and convert it to a dashboard. ###Code import plotly.express as px df = px.data.iris() fig = px.scatter(df, x="sepal_width", y="sepal_length") fig.show() ###Output _____no_output_____ ###Markdown All you need to do to convert this to a dashboard is to add a tag with the value `body` to the cell that outputs the plot. How to view and add tags to cells in Jupyter Notebook: In the top navigation go to View > Cell Toolbar > Tags. Then type "body" in the new input of the target cell and click on "Add tag" or press enter. ###Code fig = px.scatter(df, x="sepal_width", y="sepal_length") fig.show() ###Output _____no_output_____ ###Markdown Converting the Notebook to a dashboard From here there are a couple of options to convert the notebook to an HTML dashboard. 1. You can execute the notebook as you normally do in the Jupyter Notebook UI and then select: `File > Download as > Flex Dashboard (.html)`: ![Jupyter-flex Download As](/assets/img/getting-started/download-as.png) 2.
You can open a terminal and run `nbconvert`: ```$ jupyter nbconvert --to flex notebook.ipynb``` Optionally add the `--execute` flag to execute the notebook before converting it so the outputs are shown in the dashboard: ```$ jupyter nbconvert --to flex notebook.ipynb --execute``` Open the resulting `.html` file in a browser and the result will be: [![Jupyter-flex one plot](/assets/img/screenshots/getting-started/one-plot.png)](/examples/one-plot.html) Click on the image to open the rendered dashboard. You might notice that the default title of the dashboard is the name of the notebook file; you can customize this using [parameters](#parameters-orientation-and-title). This is a very simple example, so now let's look at the card concept of Jupyter-flex. Cards: Multiple outputs A Card is an object that holds one or more components of the dashboard, such as markdown or any output generated from the execution of a Cell, such as plots, text and widgets. To learn more about cards and their options go to [Layout > Cards](/layouts/cards). You define a new Card by adding a level-3 markdown header (`###`). Any output from a tagged Cell will be added to the current Card until a new Card, Section or Page is defined. Going back to the notebook example, we can add a new plot to the dashboard by adding two new cells: 1. One markdown cell with a level-3 markdown header (`###`) 2. One code cell with the `body` tag ###Code ### Second plot fig = px.scatter(df, x="petal_width", y="petal_length") fig.show() ###Output _____no_output_____ ###Markdown [![Jupyter-flex two plots](/assets/img/screenshots/getting-started/two-plots.png)](/examples/two-plots.html) Click on the image to open the rendered dashboard. You will notice two things: 1. The default layout is a single column with cards stacked vertically and sized to fill available browser height. 2.
The value of the level-3 markdown header is added to the Card header. Sections: Multiple columns To add another column to the dashboard, define a new Section using a level-2 markdown header (`##`). In this case, the value of the header is irrelevant (since the default theme doesn't show it); it acts as an indicator to create a new Section. ###Code ## Column fig = px.scatter(df, x="sepal_length", y="petal_length") fig.show() ###Output _____no_output_____ ###Markdown In this case the result would be: [![Jupyter-flex two columns](/assets/img/screenshots/getting-started/two-columns.png)](/examples/two-columns.html) Click on the image to open the rendered dashboard. You will notice another default: multiple Sections are laid out as columns. Parameters: Orientation and title You can control the parameters of the dashboard, such as its title and whether the orientation is based on rows instead of columns, by tagging a code cell with `parameters`. Let's change the orientation of the dashboard to `rows` and add a title of `A Flex dashboard`. ###Code flex_title = "A flex dashboard" flex_orientation = "rows" ###Output _____no_output_____ ###Markdown Getting started with Xanthus What is Xanthus? Xanthus is a Neural Recommender package written in Python. It started life as a personal project to take an academic ML paper and translate it into a 'production-ready' software package and to replicate the results of the paper along the way. It uses Tensorflow 2.0 under the hood, and makes extensive use of the Keras API. If you're interested, the original authors of [the paper that inspired this project](https://dl.acm.org/doi/10.1145/3038912.3052569) provided code for their experiments, and this proved valuable when starting this project. However, while it is great that they provided their code, the repository isn't maintained, the code uses old versions of Keras (and Theano!), it can be a little hard for beginners to get to grips with, and it's very much tailored to produce the results in their paper.
All fair enough, they wrote a great paper and published their workings. Admirable stuff. Xanthus aims to make it super easy to get started with the work of building a neural recommendation system, and to scale the techniques in the original paper (hopefully) gracefully with you as the complexity of your applications increases. This notebook will walk you through a basic example of using Xanthus to recommend previously unseen movies to a set of users using the classic 'Movielens' recommender dataset. The [original paper](https://dl.acm.org/doi/10.1145/3038912.3052569) tests its architectures as part of an _implicit_ recommendation problem. You'll find out more about what this means later in the notebook. In the meantime, it is worth remembering that the examples in this notebook make the same assumption. Ready for some code? Loading a sample dataset Ah, the beginning of a brand new ML problem. You'll need to download the dataset first. You can use the Xanthus `datasets.movielens.download` utility to download, unzip and save your Movielens data. ###Code from xanthus import datasets datasets.movielens.download(version="ml-latest-small", output_dir="data") ###Output _____no_output_____ ###Markdown Time to crack out Pandas and load some CSVs. You know the drill. ###Code import pandas as pd ratings = pd.read_csv("data/ml-latest-small/ratings.csv") movies = pd.read_csv("data/ml-latest-small/movies.csv") ###Output _____no_output_____ ###Markdown Let's take a look at the data we've loaded. Here's the movies dataset: ###Code movies.head() ###Output _____no_output_____ ###Markdown As you can see, you've got the unique identifier for your movies, the title of the movie in human-readable format, and then the column `genres` that has a string containing a set of associated genres for the given movie. Straightforward enough. And hey, that `genres` column might come in handy at some point... On to the `ratings` frame.
Here's what is in there: ###Code ratings.head() ###Output _____no_output_____ ###Markdown First up, you've got a `userId` corresponding to the unique user identifier, and you've got the `movieId` corresponding to the unique movie identifier (this maps onto the `movieId` column in the `movies` frame, above). You've also got a `rating` field. This is associated with the user-assigned rating for that movie. Finally, you have the `timestamp` -- the date at which the user rated the movie. For future reference, you can convert from this timestamp to a 'human readable' date with: ###Code from datetime import datetime datetime.fromtimestamp(ratings.iloc[0]["timestamp"]).strftime("%Y-%m-%d %H:%M:%S") ###Output _____no_output_____ ###Markdown That's your freebie for the day. Onto getting the data ready for training your recommender model. Data preparation Xanthus provides a few utilities for getting your recommender up and running. One of the more ubiquitous utilities is the `Dataset` class, and its related `DatasetEncoder` class. At the time of writing, the `Dataset` class assumes your 'ratings' data is in the format `user`, `item`, `rating`. You can rename the sample data to be in this format with: ###Code ratings = ratings.rename(columns={"userId": "user", "movieId": "item"}) ###Output _____no_output_____ ###Markdown Next, you might find it helpful to re-map the movie IDs (now under the `item` column) to be the `titles` in the `movies` frame. This'll make it easier for you to see what the recommender is recommending! Don't do this for big datasets though -- it can get very expensive very quickly! Anyway, remap the `item` column with: ###Code title_mapping = dict(zip(movies["movieId"], movies["title"])) ratings.loc[:, "item"] = ratings["item"].apply(lambda _: title_mapping[_]) ratings.head(2) ###Output _____no_output_____ ###Markdown A little more meaningful, eh?
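As a side note, the `apply(lambda _: title_mapping[_])` pattern above raises a `KeyError` if any ID is missing from the mapping. pandas' built-in `Series.map` does the same remapping but leaves unmapped IDs as `NaN` — a quick sketch with toy data (hypothetical values, not the real MovieLens frames):

```python
import pandas as pd

# Toy frames mirroring the columns used above (hypothetical data).
movies_toy = pd.DataFrame({"movieId": [1, 2], "title": ["Toy Story (1995)", "Jumanji (1995)"]})
ratings_toy = pd.DataFrame({"user": [10, 11, 12], "item": [1, 2, 3]})

mapping = dict(zip(movies_toy["movieId"], movies_toy["title"]))

# Series.map leaves the unmapped id 3 as NaN instead of raising a KeyError.
ratings_toy["item"] = ratings_toy["item"].map(mapping)
print(ratings_toy["item"].tolist())  # ['Toy Story (1995)', 'Jumanji (1995)', nan]
```

Whether silent `NaN`s or a loud `KeyError` is preferable depends on whether missing IDs are expected in your data.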
For this example, you are going to be looking at _implicit_ recommendations, so you should also remove clearly negative rating pairs from the dataset. You can do this with: ###Code ratings = ratings[ratings["rating"] > 3.0] ###Output _____no_output_____ ###Markdown Leave one out protocol As with any ML model, it is important to keep a held-out sample of your dataset to evaluate your model's performance. This is naturally important for recommenders too. However, recommenders differ slightly in that we are often interested in the recommender's ability to _rank_ candidate items in order to surface the most relevant content to a user. Ultimately, the essence of recommendation problems is search, and getting relevant items in the top `n` search results is generally the name of the game -- absolute accuracy can often be a secondary consideration. One common way of evaluating the performance of a recommender model is therefore to create a test set by sampling `n` items from each user's `m` interactions (e.g. movie ratings), keeping `m-n` interactions in the training set and putting the 'left out' `n` samples in the test set. The thought process then goes that when evaluating a model on this test set, you should see the model rank the held-out samples more highly in the results (i.e. it has started to learn a user's preferences). The 'leave one out' protocol is a specific case of this approach where `n=1`. Concretely, when creating a test set using 'leave one out', you withhold a single interaction from each user and put these in your test set. You then place all other interactions in your training set. To get you going, Xanthus provides a utility function called -- funnily enough -- `leave_one_out` under the `evaluate` subpackage.
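Stripped of the bookkeeping, the protocol described above amounts to a few lines of pandas. Here is a rough sketch on toy data — for illustration only, not Xanthus's actual implementation:

```python
import pandas as pd

# Toy interactions frame with the same user/item columns as `ratings`.
df = pd.DataFrame({
    "user": [1, 1, 1, 2, 2],
    "item": ["a", "b", "c", "a", "d"],
})

# Shuffle, then withhold one interaction per user for the test set;
# everything else stays in the training set.
shuffled = df.sample(frac=1.0, random_state=0)
test = shuffled.groupby("user").head(1)
train = shuffled.drop(test.index)

print(len(test), len(train))  # 2 3
```

Each user contributes exactly one row to `test`, and no row appears in both splits.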
You can import it and use it as follows: ###Code from xanthus.evaluate import leave_one_out train_df, test_df = leave_one_out(ratings, shuffle=True, deduplicate=True) ###Output _____no_output_____ ###Markdown You'll notice that there's a couple of things going on here. Firstly, the function takes the input interactions frame (in this case `ratings`) and splits it into the two datasets as expected. Fair enough. We then have two keyword arguments `shuffle` and `deduplicate`. The argument `shuffle` will -- you guessed it -- shuffle your dataset before sampling interactions for your test set. This is set to `True` by default, so it is shown here for the purpose of being explicit. The second argument is `deduplicate`. This does what you might expect too -- it strips any cases where a user interacts with a specific item more than once (i.e. a given user-item pair appears more than once). As discussed above, the `leave_one_out` function is really a specific version of a more general 'leave `n` out' approach to splitting a dataset. There are also other ways you might want to split datasets for recommendation problems. For many of those circumstances, Xanthus provides a more generic `split` function. This was inspired by Azure's [_Recommender Split_](https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/split-data-using-recommender-split) method in Azure ML Studio. There are a few important tweaks in the Xanthus implementation, so make sure to check out that function's documentation if you're interested. Anyway, time to build some datasets. Introducing the `Dataset` Like other ML problems, recommendation problems typically need to create encoded representations of a domain in order to be passed into a model for training and evaluation. However, there's a few aspects of recommendation problems that can make this problem particularly fiddly.
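At its core, though, the encoding problem is just building a consistent mapping from raw user/item keys to contiguous integer ids that the train and test splits must share. A stripped-down sketch — a hypothetical `SimpleEncoder`, only loosely shaped like the real API:

```python
class SimpleEncoder:
    """Illustrative encoder: assigns each raw user/item key a stable,
    contiguous integer id shared across train and test splits."""

    def __init__(self):
        self.user_ids = {}
        self.item_ids = {}

    def fit(self, users, items):
        # setdefault assigns the next free integer only on first sight,
        # so repeated keys keep their original id.
        for u in users:
            self.user_ids.setdefault(u, len(self.user_ids))
        for i in items:
            self.item_ids.setdefault(i, len(self.item_ids))
        return self

    def transform(self, users=None, items=None):
        encoded_users = [self.user_ids[u] for u in users] if users is not None else None
        encoded_items = [self.item_ids[i] for i in items] if items is not None else None
        return encoded_users, encoded_items

enc = SimpleEncoder().fit([1, 2, 2, 3], ["a", "b", "a"])
print(enc.transform(items=["a", "b"]))  # (None, [0, 1])
```

Fitting one encoder on the full interactions frame and reusing it for both splits is exactly what avoids the "same movie, different id" failure mode.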
To help you on your way, Xanthus provides a few utilities, including the `Dataset` class and the `DatasetEncoder` class. These structures are designed to take care of the fiddliness for you. They'll build your input vectors (including with metadata, if you provide it -- more on that later) and sparse matrices as required. You shouldn't need to touch a thing. Here's how it works. First, your 'train' and 'test' datasets are going to need to share the same encodings, right? Otherwise they'll disagree on whether `Batman Forever (1995)` shares the same encoding across the datasets, and that would be a terrible shame. To create your `DatasetEncoder` you can do this: ###Code from xanthus.datasets import DatasetEncoder encoder = DatasetEncoder() encoder.fit(ratings["user"], ratings["item"]) ###Output _____no_output_____ ###Markdown This encoder will store all of the unique encodings of every user and item in the `ratings` set. Notice that you're passing in the `ratings` set here, as opposed to either train or test. This makes doubly sure you're creating encodings for every user-item pair in the dataset. To check this has worked, you can call the `transform` method on the encoder like this: ###Code encoder.transform(items=["Batman Forever (1995)"]) ###Output _____no_output_____ ###Markdown The naming conventions on the `DatasetEncoder` are deliberately reminiscent of the methods on Scikit-Learn encoders, just to help you along with using them. Now you've got your encoder, you can create your `Dataset` objects: ###Code from xanthus.datasets import Dataset, utils train_ds = Dataset.from_df(train_df, normalize=utils.as_implicit, encoder=encoder) test_ds = Dataset.from_df(test_df, normalize=utils.as_implicit, encoder=encoder) ###Output _____no_output_____ ###Markdown Let's unpack what's going on here. The `Dataset` class provides the `from_df` class method for quickly constructing a `Dataset` from a 'raw' Pandas `DataFrame`.
You want to create a train and test dataset, hence creating two separate `Dataset` objects using this method. Next, you can see that the `encoder` keyword argument is passed in to the `from_df` method. This ensures that each `Dataset` maintains a reference to the _same_ `DatasetEncoder` to ensure consistency when used. The final argument here is `normalize`. This expects a callable object (e.g. a function) that scales the `rating` column (if provided). In the case of this example, the normalization is simply to treat the ratings as an implicit recommendation problem (i.e. all zero or one). The `utils.as_implicit` function simply sets all ratings to one. Simple enough, eh? And that is it for preparing your datasets for modelling, at least for now. Time for some Neural Networks. Getting neural With your datasets ready, you can build and fit your model. In the example, the `GeneralizedMatrixFactorization` (or `GMFModel`) is used. If you're not sure what a GMF model is, be sure to check out the original paper, and the GMF class itself in the Xanthus docs. Anyway, here's how you set it up: ###Code from xanthus.models import GeneralizedMatrixFactorization as GMFModel model = GMFModel(train_ds.user_dim, train_ds.item_dim, factors=64) model.compile(optimizer="adam", loss="binary_crossentropy") ###Output _____no_output_____ ###Markdown So what's going on here? Well, `GMFModel` is a _subclass_ of the Keras `Model` class. Consequently, it shares the same interface. You will initialize your model with specific information (in this case information related to the size of the user and item input vectors and the size of the latent factors you're looking to compute), compile the model with a given loss and optimizer, and then train it. Straightforward enough, eh? In principle, you can use `GMFModel` however you'd use a 'normal' Keras model. You're now ready to fit your model.
You can do this with: ###Code # prepare training data users_x, items_x, y = train_ds.to_components( negative_samples=4 ) model.fit([users_x, items_x], y, epochs=5) ###Output Epoch 1/5 5729/5729 [==============================] - 7s 1ms/step - loss: 0.5001 Epoch 2/5 5729/5729 [==============================] - 7s 1ms/step - loss: 0.3685 Epoch 3/5 5729/5729 [==============================] - 7s 1ms/step - loss: 0.2969 Epoch 4/5 5729/5729 [==============================] - 7s 1ms/step - loss: 0.2246 Epoch 5/5 5729/5729 [==============================] - 7s 1ms/step - loss: 0.1581 ###Markdown Remember that (as with any ML model) you'll want to tweak your hyperparameters (e.g. `factors`, regularization, etc.) to optimize your model's performance on your given dataset. The example model here is just a quick un-tuned model to show you the ropes. Evaluating the model Now to diagnose how well your model has done. The evaluation protocol here is set up in accordance with the methodology outlined in [the original paper](). To get yourself ready to generate some scores, you'll need to run: ###Code from xanthus.evaluate import create_rankings users, items = create_rankings( test_ds, train_ds, output_dim=1, n_samples=100, unravel=True ) ###Output _____no_output_____ ###Markdown So, what's going on here? First, you're importing the `create_rankings` function. This implements a sampling approach used by _He et al_ in their work. The idea is that you evaluate your model on the user-item pairs in your test set, and for each 'true' user-item pair, you sample `n_samples` negative instances for that user (i.e. items they haven't interacted with). In the case of the `create_rankings` function, this produces an array of shape `n_users, n_samples + 1`. Concretely, for each user, you'll get an array where the first element is a positive sample (something they _did_ interact with) and `n_samples` negative samples (things they _did not_ interact with).
The rationale here is that by having the model rank these `n_samples + 1` items for each user, you'll be able to determine whether your model is learning an effective ranking function -- the positive sample _should_ appear higher in the recommendations than the negative results if the model is doing its job. Here's how you can rank these sampled items: ###Code from xanthus.models import utils test_users, test_items, _ = test_ds.to_components(shuffle=False) scores = model.predict([users, items], verbose=1, batch_size=256) recommended = utils.reshape_recommended(users.reshape(-1, 1), items.reshape(-1, 1), scores, 10, mode="array") ###Output 240/240 [==============================] - 0s 540us/step ###Markdown And finally for the evaluation, you can use the `score` function and the provided `metrics` in the Xanthus `evaluate` subpackage. Here's how you can use them: ###Code from xanthus.evaluate import score, metrics print("t-nDCG", score(metrics.truncated_ndcg, test_items, recommended).mean()) print("HR@k", score(metrics.precision_at_k, test_items, recommended).mean()) ###Output t-nDCG 0.4719391834962755 HR@k 0.7351973684210527 ###Markdown Looking okay. Good work. Going into detail on how the metrics presented here work is beyond the scope of this notebook. If you're interested in what is going on here, make sure to check out the docs (docstrings) in the Xanthus package itself. The fun bit After all of that, it is time to see what you've won. Exciting times. You can generate recommendations for your users _from unseen items_ by using the following: ###Code scores = model.predict([users, items], verbose=1, batch_size=256) recommended = utils.reshape_recommended(users.reshape(-1, 1), items.reshape(-1, 1), scores, 10, mode="array") ###Output 240/240 [==============================] - 0s 578us/step ###Markdown Recall that the first 'column' in the `items` array corresponds to the positive sample for a user. You can skip that here.
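For intuition, the kind of regrouping that `reshape_recommended` performs can be sketched as follows -- a hypothetical stand-in, not the library's code: collect the flat `(user, item, score)` triplets per user and keep each user's `k` highest-scoring items.

```python
from collections import defaultdict


def reshape_recommended(users, items, scores, k):
    """Group flat (user, item, score) triplets and keep each user's top-k items."""
    by_user = defaultdict(list)
    for u, i, s in zip(users, items, scores):
        by_user[u].append((s, i))
    return {
        u: [item for _, item in sorted(pairs, reverse=True)[:k]]
        for u, pairs in by_user.items()
    }


users = [0, 0, 0, 1, 1, 1]
items = [10, 11, 12, 10, 13, 14]
scores = [0.9, 0.1, 0.5, 0.2, 0.8, 0.4]
top2 = reshape_recommended(users, items, scores, k=2)
# → {0: [10, 12], 1: [13, 14]}
```

The real function works on contiguous arrays for speed, but the idea is the same.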
So now you have a great big array of integers. Not as exciting as you'd hoped? Fair enough. Xanthus provides a utility to convert the outputs of your model predictions into a more readable Pandas `DataFrame`. Specifically, your `DatasetEncoder` has the handy `to_df` method for just this job. Give it a set of _encoded_ users and a list of _encoded_ items for each user, and it'll build you a nice `DataFrame`. Here's how: ###Code recommended_df = encoder.to_df(test_users.flatten(), recommended) recommended_df.head(25) ###Output _____no_output_____ ###Markdown Jupyter-flex allows you to create dashboards from Jupyter Notebooks based on two simple concepts:1. Control the layout of the dashboard using markdown headers (`#`, `##` and `###`)2. Define the dashboard components using Jupyter Notebook cell tags (`body` and others) Your first dashboard Let's take a very simple Jupyter Notebook with one plot and make a dashboard. The notebook is: ###Code import numpy as np import pandas as pd import altair as alt from vega_datasets import data alt.renderers.set_embed_options(actions=False) np.random.seed(42) source = data.cars() plot = alt.Chart(source).mark_circle(size=60).encode( x='Horsepower', y='Miles_per_Gallon', color='Origin', tooltip=['Name', 'Origin', 'Horsepower', 'Miles_per_Gallon'] ) plot ###Output _____no_output_____ ###Markdown All you need to do to convert this to a dashboard is to add a `body` tag to the cell that has the plot as its output. How to view and add tags to cells in Jupyter Lab You can find a tag editor by clicking the gears icon at the top of the right sidebar How to view and add tags to cells in Jupyter Classic Notebook In the top navigation go to View > Cell Toolbar > Tags Then type "body" in the new input of the target cell and click on "Add tag" or press enter Responsive plots Depending on the plotting library you use, you might need to add a bit of code to make the plot occupy all the space of the card.
See the plotting page for more info. Converting the Notebook to an HTML file There are a couple of options to convert the notebook to an HTML dashboard.1. Execute the notebook as you normally do in the Jupyter Notebook UI and then select: `File > Download as > Flex Dashboard (.html)`:![Jupyter-flex Download As](/assets/img/getting-started/download-as.png)2. You can go in a terminal and run `nbconvert`:Terminal```$ jupyter nbconvert --to flex notebook.ipynb```Optionally add the `--execute` flag to execute the notebook before converting it to a dashboard.Terminal```$ jupyter nbconvert --to flex notebook.ipynb --execute```Open the resulting `.html` file in a browser and the result will be:[![](/assets/img/screenshots/jupyter_flex.tests.test_examples/docs_1-one-plot-reference.png)](/examples/1-one-plot.html)Click on the image to open dashboard You might notice that the default title of the dashboard is the name of the notebook file; you can customize this using [parameters](parameters-orientation-and-title). Cards: Multiple outputs A Card is an object that holds one or more Cells. Cells can be markdown or code cells with outputs such as plots, text and widgets. You define a new Card by adding a level-3 markdown header (`###`). Any output from a tagged Cell will be added to the current Card until a new Card, Section or Page is defined. Going back to the notebook example we can add a new plot to the dashboard by adding two new cells:1. One markdown cell with a level-3 markdown header (`###`)2. One code cell with the `body` tag ###Code ### Second plot source = data.stocks() plot = alt.Chart(source).mark_area( color="lightblue", interpolate='step-after', line=True ).encode( x='date', y='price' ).transform_filter(alt.datum.symbol == 'GOOG') plot ###Output _____no_output_____ ###Markdown [![](/assets/img/screenshots/jupyter_flex.tests.test_examples/docs_2-two-plots-reference.png)](/examples/2-two-plots.html)Click on the image to open dashboard You will notice two things:1.
The default layout is a single column with cards stacked vertically and sized to fill available browser height.2. The value of the level-3 markdown header is added as the Card title Sections: Multiple columns To add another column to the dashboard define a new Section using a level 2 markdown header (`##`) In this case, the value of the header is irrelevant (it won't be shown on the dashboard); it just acts as an indicator to create a new Section. ###Code ## Column source = data.iris() plot = alt.Chart(source).mark_circle().encode( alt.X('sepalLength', scale=alt.Scale(zero=False)), alt.Y('sepalWidth', scale=alt.Scale(zero=False, padding=1)), color='species', size='petalWidth' ) plot ###Output _____no_output_____ ###Markdown In this case the result would be:[![](/assets/img/screenshots/jupyter_flex.tests.test_examples/docs_3-two-columns-reference.png)](/examples/3-two-columns.html)Click on the image to open dashboard You will notice another default: multiple Sections are laid out as columns. Parameters: Orientation and title You can control the parameters of the dashboard such as title, orientation and more by adding a `parameters` tag to a code cell. Let's add a title of `My first Flex dashboard` and change the orientation of the sections to `rows` ###Code flex_title = "My first Flex dashboard" flex_orientation = "rows" ###Output _____no_output_____ ###Markdown Getting Started To get started with atomphys, you can install it with pip:```console$ pip install atomphys```Alternatively, you can run any of these examples in [binder](https://mybinder.org/) without any installation simply by clicking the link at the top of the page. To start with, import the `Atom` object, and use it to create a Cesium atom. By default, this will automatically populate with all of the states and transitions in the NIST Atomic Spectra Database.
###Code from atomphys import Atom Cs = Atom('Cs') Cs ###Output _____no_output_____ ###Markdown States You can then look up states either by *index* (the states are energy ordered), or by searching by term symbol: ###Code Cs[1] Cs('P3/2') ###Output _____no_output_____ ###Markdown You can access properties of a state like the energy or lifetime. ###Code Cs('P1/2').energy Cs('P1/2').τ Cs('P1/2').lifetime ###Output _____no_output_____ ###Markdown By default atomphys uses [atomic units](https://en.wikipedia.org/wiki/Hartree_atomic_units). You can use pint to convert to units of your choice. ###Code Cs('P1/2').τ.to('ns') (1 / Cs('P1/2').lifetime ).to('MHz') ###Output _____no_output_____ ###Markdown Transitions You can access transitions originating from a state, ###Code Cs('S1/2').to('P1/2') ###Output _____no_output_____ ###Markdown as well as properties of that transition, such as wavelength, dipole matrix element, and saturation intensity ###Code Cs('S1/2').to('P1/2').λ.to('nm') Cs('S1/2').to('P1/2').reduced_dipole_matrix_element.to('e a0') Cs('S1/2').to('P1/2').Isat.to('mW/cm^2') ###Output _____no_output_____ ###Markdown Getting Started To get started with atomphys, you can install it with pip:```consolepip install atomphys```Alternatively, you can run any of these examples in [binder](https://mybinder.org/) without any installation simply by clicking the link at the top of the page. To start with, import the `Atom` object, and use it to create a Cesium atom. By default, this will automatically populate with all of the states and transitions in the NIST Atomic Spectra Database. ###Code from atomphys import Atom Cs = Atom('Cs') Cs ###Output _____no_output_____ ###Markdown States You can then look up states either by *index* (the states are energy ordered), or by searching by term symbol: ###Code Cs(1) Cs('P3/2') ###Output _____no_output_____ ###Markdown You can access properties of a state like the energy or lifetime.
###Code Cs('P1/2').energy Cs('P1/2').τ Cs('P1/2').lifetime ###Output _____no_output_____ ###Markdown By default atomphys uses [atomic units](https://en.wikipedia.org/wiki/Hartree_atomic_units). You can use pint to convert to units of your choice. ###Code Cs('P1/2').τ.to('ns') (1 / Cs('P1/2').lifetime ).to('MHz') ###Output _____no_output_____ ###Markdown Transitions You can access transitions originating from a state, ###Code Cs('S1/2').to('P1/2') ###Output _____no_output_____ ###Markdown as well as properties of that transition, such as wavelength, dipole matrix element, and saturation intensity ###Code Cs('S1/2').to('P1/2').λ.to('nm') Cs('S1/2').to('P1/2').matrix_element.to('e a0') Cs('S1/2').to('P1/2').Isat.to('mW/cm^2') ###Output _____no_output_____
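###Markdown For context, the `(1 / lifetime).to('MHz')` conversion shown above is just the decay rate $\Gamma = 1/\tau$ re-expressed in MHz. Stripped of pint, the arithmetic looks like this (using an illustrative 30 ns lifetime, not the actual Cs value):

```python
import math

tau = 30e-9                                  # illustrative lifetime in seconds (not the real Cs P1/2 value)
gamma = 1.0 / tau                            # decay rate in s^-1
gamma_mhz = gamma / 1e6                      # same quantity expressed in MHz, like (1/τ).to('MHz')
linewidth_mhz = gamma / (2 * math.pi) / 1e6  # natural linewidth Γ/2π in MHz

print(round(gamma_mhz, 2), round(linewidth_mhz, 2))
```

pint simply tracks the units for you so that conversions like this cannot go wrong silently.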
Local_Histogram_Equalization.ipynb
###Markdown Local Histogram Equalization Dr. Tirthajyoti Sarkar, Fremont CA 94536, June 2019 Histogram equalization increases the global contrast of the test image, especially when the usable data of the image is represented by close contrast values. Through this adjustment, the intensities can be better distributed on the histogram. This allows areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values. ###Code import numpy as np import matplotlib import matplotlib.pyplot as plt from skimage import data from skimage.util.dtype import dtype_range from skimage.util import img_as_ubyte from skimage import exposure from skimage.morphology import disk from skimage.filters import rank ###Output _____no_output_____ ###Markdown Function to display an image along with its histogram and cumulative distribution function (CDF) ###Code def plot_img_and_hist(image, axes, bins=256): """Plot an image along with its histogram and cumulative histogram.
""" ax_img, ax_hist = axes ax_cdf = ax_hist.twinx() # Display image ax_img.imshow(image, cmap=plt.cm.gray) ax_img.set_axis_off() # Display histogram ax_hist.hist(image.ravel(), bins=bins) ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0)) ax_hist.set_xlabel('Pixel intensity') xmin, xmax = dtype_range[image.dtype.type] ax_hist.set_xlim(xmin, xmax) # Display cumulative distribution img_cdf, bins = exposure.cumulative_distribution(image, bins) ax_cdf.plot(bins, img_cdf, 'r',lw=3) return ax_img, ax_hist, ax_cdf ###Output _____no_output_____ ###Markdown Load an example image ###Code img = img_as_ubyte(data.moon()) ###Output _____no_output_____ ###Markdown Global equalization (using `exposure.equalize_hist()`) ###Code img_rescale = exposure.equalize_hist(img) ###Output _____no_output_____ ###Markdown Local equalization (using `rank.equalize()`) ###Code selem = disk(30) img_eq = rank.equalize(img, selem=selem) fig = plt.figure(figsize=(8, 5)) axes = np.zeros((2, 3), dtype=np.object) axes[0, 0] = plt.subplot(2, 3, 1) axes[0, 1] = plt.subplot(2, 3, 2, sharex=axes[0, 0], sharey=axes[0, 0]) axes[0, 2] = plt.subplot(2, 3, 3, sharex=axes[0, 0], sharey=axes[0, 0]) axes[1, 0] = plt.subplot(2, 3, 4) axes[1, 1] = plt.subplot(2, 3, 5) axes[1, 2] = plt.subplot(2, 3, 6) ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0]) ax_img.set_title('Low contrast image') ax_hist.set_ylabel('Number of pixels') ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1]) ax_img.set_title('Global equalization') ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2]) ax_img.set_title('Local equalization') ax_cdf.set_ylabel('Fraction of total intensity') # prevent overlap of y-axis labels fig.tight_layout() plt.show() ###Output _____no_output_____
Classifiers/Basic_Features_2048_5e-3_selu_mult1.ipynb
###Markdown First batch size 2048, lr 5e-3 Import modules ###Code %matplotlib inline from __future__ import division import sys import os os.environ['MKL_THREADING_LAYER']='GNU' sys.path.append('../') from Modules.Basics import * from Modules.Class_Basics import * ###Output /home/giles/anaconda2/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters /home/giles/anaconda2/lib/python2.7/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead. from pandas.core import datetools Using TensorFlow backend. ###Markdown Options ###Code classTrainFeatures = basic_features classModel = 'modelSelu' varSet = "basic_features" nSplits = 10 ensembleSize = 10 ensembleMode = 'loss' maxEpochs = 200 compileArgs = {'loss':'binary_crossentropy', 'optimizer':'adam'} trainParams = {'epochs' : 1, 'batch_size' : 2048, 'verbose' : 0} modelParams = {'version':classModel, 'nIn':len(classTrainFeatures), 'compileArgs':compileArgs} print "\nTraining on", len(classTrainFeatures), "features:", [var for var in classTrainFeatures] ###Output Training on 5 features: ['jetPt', 'jetEta', 'jetMass', 'ntracks', 'ntowers'] ###Markdown Import data ###Code trainData = h5py.File(dirLoc + 'train.hdf5', "r+") valData = h5py.File(dirLoc + 'testing.hdf5', "r+") ###Output _____no_output_____ ###Markdown Determine LR ###Code lrFinder = batchLRFindClassifier(trainData, nSplits, getClassifier, modelParams, trainParams, lrBounds=[1e-7,1e-2], trainOnWeights=False, verbose=0) compileArgs['lr'] = 5e-3 ###Output _____no_output_____ ###Markdown Train classifier ###Code results, histories = batchTrainClassifier(trainData, nSplits, getClassifier, modelParams,
trainParams, patience=10, cosAnnealMult=1, trainOnWeights=False, maxEpochs=maxEpochs, verbose=1) ###Output Using cosine annealing Running fold 1 / 10 2 classes found, running in binary mode 1 New best found: 0.491783898685 2 New best found: 0.489882329838 4 New best found: 0.489098813778 6 New best found: 0.488634631575 7 New best found: 0.488416921263 9 New best found: 0.488337600683 12 New best found: 0.488259412228 17 New best found: 0.487956088563 25 New best found: 0.487951367031 34 New best found: 0.487871680217 40 New best found: 0.487781575373 45 New best found: 0.487685398584 54 New best found: 0.487383095077 Early stopping after 64 epochs Score is: {'loss': 0.4873830950772872, 'AUC': 0.2109543826873822} Fold took 214.952s Running fold 2 / 10 1 New best found: 0.497623806903 2 New best found: 0.493301805792 3 New best found: 0.492153657788 5 New best found: 0.492008779067 6 New best found: 0.491769385982 7 New best found: 0.490931321265 15 New best found: 0.490802199578 16 New best found: 0.490793260434 17 New best found: 0.490746660901 25 New best found: 0.490614975571 27 New best found: 0.490329471668 30 New best found: 0.490254371237 36 New best found: 0.490228377432 38 New best found: 0.4900220003 46 New best found: 0.490004517649 Early stopping after 56 epochs Score is: {'loss': 0.49000451764887754, 'AUC': 0.21377706845286948} Fold took 202.044s Running fold 3 / 10 ###Markdown Construct ensemble ###Code with open('train_weights/resultsFile.pkl', 'r') as fin: results = pickle.load(fin) ensemble, weights = assembleEnsemble(results, ensembleSize, ensembleMode, compileArgs) ###Output _____no_output_____ ###Markdown Response on development data ###Code batchEnsemblePredict(ensemble, weights, trainData, ensembleSize=10, verbose=1) print 'Training ROC AUC: unweighted {}, weighted {}'.format(roc_auc_score(getFeature('targets', trainData), getFeature('pred', trainData)), roc_auc_score(getFeature('targets', trainData), getFeature('pred', trainData), 
sample_weight=getFeature('weights', trainData))) ###Output _____no_output_____ ###Markdown Response on val data ###Code batchEnsemblePredict(ensemble, weights, valData, ensembleSize=10, verbose=1) print 'Testing ROC AUC: unweighted {}, weighted {}'.format(roc_auc_score(getFeature('targets', valData), getFeature('pred', valData)), roc_auc_score(getFeature('targets', valData), getFeature('pred', valData), sample_weight=getFeature('weights', valData))) ###Output _____no_output_____ ###Markdown Evaluation Import in dataframe ###Code def convertToDF(datafile, columns={'gen_target', 'gen_weight', 'pred_class'}, nLoad=10): data = pandas.DataFrame() data['gen_target'] = getFeature('targets', datafile, nLoad) data['pred_class'] = getFeature('pred', datafile, nLoad) print len(data), "candidates loaded" return data valData = convertToDF(valData) sigVal = (valData.gen_target == 1) bkgVal = (valData.gen_target == 0) ###Output _____no_output_____ ###Markdown MVA distributions ###Code getClassPredPlot([valData[bkgVal], valData[sigVal]]) ###Output _____no_output_____
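###Markdown For reference, the weighted-ensemble prediction applied above by `batchEnsemblePredict` boils down to combining the member models' outputs. A sketch under the assumption that members are combined by a weighted mean -- not the actual `Modules` implementation:

```python
def ensemble_predict(member_preds, weights):
    """Weighted average of per-model predicted probabilities for each sample."""
    total = sum(weights)
    return [
        sum(w * p for w, p in zip(weights, sample_preds)) / total
        for sample_preds in zip(*member_preds)
    ]


# three models' probabilities for two samples, weighted by e.g. validation loss
preds = [[0.9, 0.2], [0.8, 0.4], [0.7, 0.3]]
weights = [0.5, 0.3, 0.2]
ensemble_predict(preds, weights)  # ≈ [0.83, 0.28]
```

Averaging several fold-trained classifiers this way usually smooths out fold-to-fold variance in the predictions.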
doc/examples/conversion_reaction.ipynb
###Markdown Ordinary differential equations: Conversion reaction==================================================== ###Code This example was kindly contributed by Lukas Sandmeir and Elba Raimundez. It can be downloaded here: :download:`Ordinary Differential Equations <conversion_reaction.ipynb>`. ###Output _____no_output_____ ###Markdown **Note:** Before you use pyABC to parametrize your ODE, please be aware of potential errors introduced by inadequately representing the data generation process, see also the "Measurement noise assessment" notebook. For deterministic models, there are often more efficient alternatives to ABC, check out for example our tool pyPESTO. This example provides a model for the interconversion of two species ($X_1$ and $X_2$) following first-order mass action kinetics with the parameters $\Theta_1$ and $\Theta_2$ respectively:$$ X_1 \rightarrow X_2, \quad\text{rate} = \Theta_1 \cdot [X_1]$$$$ X_2 \rightarrow X_1, \quad\text{rate} = \Theta_2 \cdot [X_2]$$Measurement of $[X_2]$ is provided as $Y = [X_2]$. We will show how to estimate $\Theta_1$ and $\Theta_2$ using pyABC. ###Code # install if not done yet !pip install pyabc --quiet %matplotlib inline from pyabc import (ABCSMC, RV, Distribution, MedianEpsilon, LocalTransition) from pyabc.visualization import plot_kde_2d, plot_data_callback import matplotlib.pyplot as plt import os import tempfile import numpy as np import scipy as sp db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "test.db")) ###Output _____no_output_____ ###Markdown Data----We use an artificial data set which consists of a vector of time points $t$ and a measurement vector $Y$. This data was created using the parameter values which are assigned to $\Theta_{\text{true}}$ and by adding normally distributed measurement noise with variance $\sigma^2 = 0.015^2$.
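As an aside, this linear two-state system has a closed-form solution, so the synthetic-data recipe just described can be sketched without an ODE solver (the parameter values match the ones defined in this notebook; the noise seed is arbitrary):

```python
import math
import random

theta1, theta2 = math.exp(-2.5), math.exp(-2.0)  # the notebook's true parameters
k = theta1 + theta2


def x2_analytic(t):
    # x2(t) for x1(0)=1, x2(0)=0: theta1/k * (1 - exp(-k*t))
    return theta1 / k * (1.0 - math.exp(-k * t))


rng = random.Random(0)
sigma = 0.015
synthetic = [x2_analytic(t) + rng.gauss(0.0, sigma) for t in range(11)]
```

The noiseless values closely track the measurement vector used below (e.g. $x_2(10) \approx 0.335$).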
ODE model---------$$ \begin{align*} \frac{dX_1}{dt} &= -\Theta_1 \cdot X_1 + \Theta_2 \cdot X_2\\ \frac{dX_2}{dt} &= \Theta_1 \cdot X_1 - \Theta_2 \cdot X_2 \end{align*}$$ Define the true parameters ###Code theta1_true, theta2_true = np.exp([-2.5, -2]) ###Output _____no_output_____ ###Markdown and the measurement data ###Code measurement_data = np.array([0.0244, 0.0842, 0.1208, 0.1724, 0.2315, 0.2634, 0.2831, 0.3084, 0.3079, 0.3097, 0.3324]) ###Output _____no_output_____ ###Markdown as well as the time points at which to evaluate ###Code measurement_times = np.arange(len(measurement_data)) ###Output _____no_output_____ ###Markdown and the initial conditions for $X_1$ and $X_2$ ###Code init = np.array([1, 0]) ###Output _____no_output_____ ###Markdown Define the ODE model ###Code def f(y, t0, theta1, theta2): x1, x2 = y dx1 = - theta1 * x1 + theta2 * x2 dx2 = theta1 * x1 - theta2 * x2 return dx1, dx2 def model(pars): sol = sp.integrate.odeint( f, init, measurement_times, args=(pars["theta1"],pars["theta2"])) return {"X_2": sol[:,1]} ###Output _____no_output_____ ###Markdown Integration of the ODE model for the true parameter values ###Code true_trajectory = model({"theta1": theta1_true, "theta2": theta2_true})["X_2"] ###Output _____no_output_____ ###Markdown Let's visualize the results ###Code plt.plot(true_trajectory, color="C0", label='Simulation') plt.scatter(measurement_times, measurement_data, color="C1", label='Data') plt.xlabel('Time $t$') plt.ylabel('Measurement $Y$') plt.title('Conversion reaction: True parameters fit') plt.legend() plt.show() def distance(simulation, data): return np.absolute(data["X_2"] - simulation["X_2"]).sum() ###Output _____no_output_____ ###Markdown Define the prior for $\Theta_1$ and $\Theta_2$ ###Code parameter_prior = Distribution(theta1=RV("uniform", 0, 1), theta2=RV("uniform", 0, 1)) parameter_prior.get_parameter_names() abc = ABCSMC(models=model, parameter_priors=parameter_prior, distance_function=distance, population_size=50,
transitions=LocalTransition(k_fraction=.3), eps=MedianEpsilon(500, median_multiplier=0.7)) abc.new(db_path, {"X_2": measurement_data}); h = abc.run(minimum_epsilon=0.1, max_nr_populations=5) ###Output INFO:ABC:t: 0, eps: 500. INFO:ABC:Acceptance rate: 50 / 53 = 9.4340e-01, ESS=5.0000e+01. INFO:ABC:t: 1, eps: 1.5042107753358946. INFO:ABC:Acceptance rate: 50 / 137 = 3.6496e-01, ESS=3.4582e+01. INFO:ABC:t: 2, eps: 0.6016110594343894. INFO:ABC:Acceptance rate: 50 / 206 = 2.4272e-01, ESS=3.2066e+01. INFO:ABC:t: 3, eps: 0.36455456951279086. INFO:ABC:Acceptance rate: 50 / 390 = 1.2821e-01, ESS=4.8250e+01. INFO:ABC:t: 4, eps: 0.19779626599870773. INFO:ABC:Acceptance rate: 50 / 251 = 1.9920e-01, ESS=4.2094e+01. INFO:History:Done <ABCSMC(id=1, start_time=2020-05-17 19:15:07.639385, end_time=2020-05-17 19:15:09.697238)> ###Markdown Visualization of the probability density functions for $\Theta_1$ and $\Theta_2$ ###Code fig = plt.figure(figsize=(10,8)) for t in range(h.max_t+1): ax = fig.add_subplot(3, int(np.ceil(h.max_t / 3)), t+1) ax = plot_kde_2d( *h.get_distribution(m=0, t=t), "theta1", "theta2", xmin=0, xmax=1, numx=200, ymin=0, ymax=1, numy=200, ax=ax) ax.scatter([theta1_true], [theta2_true], color="C1", label='$\Theta$ true = {:.3f}, {:.3f}'.format( theta1_true, theta2_true)) ax.set_title("Posterior t={}".format(t)) ax.legend() fig.tight_layout() ###Output _____no_output_____ ###Markdown We can also plot the simulated trajectories: ###Code _, ax = plt.subplots() def plot_data(sum_stat, weight, ax, **kwargs): """Plot a single trajectory""" ax.plot(measurement_times, sum_stat['X_2'], color='grey', alpha=0.1) def plot_mean(sum_stats, weights, ax, **kwargs): """Plot mean over all samples""" weights = np.array(weights) weights /= weights.sum() data = np.array([sum_stat['X_2'] for sum_stat in sum_stats]) mean = (data * weights.reshape((-1, 1))).sum(axis=0) ax.plot(measurement_times, mean, color='C2', label='Sample mean') ax = plot_data_callback(h, plot_data, plot_mean, ax=ax)
plt.plot(true_trajectory, color="C0", label='Simulation') plt.scatter(measurement_times, measurement_data, color="C1", label='Data') plt.xlabel('Time $t$') plt.ylabel('Measurement $Y$') plt.title('Conversion reaction: Simulated data fit') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Ordinary Differential Equations: Conversion Reaction============================ ###Code This example was kindly contributed by Lukas Sandmeir and Elba Raimundez. It can be downloaded here: :download:`Ordinary Differential Equations <conversion_reaction.ipynb>`. ###Output _____no_output_____ ###Markdown This example provides a model for the interconversion of two species ($X_1$ and $X_2$) following first-order mass action kinetics with the parameters $\Theta_1$ and $\Theta_2$ respectively:$$ X_1 \rightarrow X_2, \quad\text{rate} = \Theta_1 \cdot [X_1]$$$$ X_2 \rightarrow X_1, \quad\text{rate} = \Theta_2 \cdot [X_2]$$Measurement of $[X_2]$ is provided as $Y = [X_2]$. We will show how to estimate $\Theta_1$ and $\Theta_2$ using pyABC. ###Code %matplotlib inline from pyabc import (ABCSMC, RV, Distribution, MedianEpsilon, LocalTransition) from pyabc.visualization import plot_kde_2d import matplotlib.pyplot as plt import os import tempfile import scipy as sp db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "test.db")) ###Output _____no_output_____ ###Markdown Data----We use an artificial data set which consists of a vector of time points $t$ and a measurement vector $Y$. This data was created using the parameter values which are assigned to $\Theta_{\text{true}}$ and by adding normally distributed measurement noise with variance $\sigma^2 = 0.015^2$.
ODE model---------$$ \begin{align*} \frac{dX_1}{dt} &= -\Theta_1 \cdot X_1 + \Theta_2 \cdot X_2\\ \frac{dX_2}{dt} &= \Theta_1 \cdot X_1 - \Theta_2 \cdot X_2 \end{align*}$$ Define the true parameters ###Code theta1_true, theta2_true = sp.exp([-2.5, -2]) ###Output _____no_output_____ ###Markdown and the measurement data ###Code measurement_data = sp.array([0.0244, 0.0842, 0.1208, 0.1724, 0.2315, 0.2634, 0.2831, 0.3084, 0.3079, 0.3097, 0.3324]) ###Output _____no_output_____ ###Markdown as well as the time points at which to evaluate ###Code measurement_times = sp.arange(len(measurement_data)) ###Output _____no_output_____ ###Markdown and the initial conditions for $X_1$ and $X_2$ ###Code init = sp.array([1, 0]) ###Output _____no_output_____ ###Markdown Define the ODE model ###Code def f(y, t0, theta1, theta2): x1, x2 = y dx1 = - theta1 * x1 + theta2 * x2 dx2 = theta1 * x1 - theta2 * x2 return dx1, dx2 def model(pars): sol = sp.integrate.odeint( f, init, measurement_times, args=(pars["theta1"],pars["theta2"])) return {"X_2": sol[:,1]} ###Output _____no_output_____ ###Markdown Integration of the ODE model for the true parameter values ###Code true_trajectory = model({"theta1": theta1_true, "theta2": theta2_true})["X_2"] ###Output _____no_output_____ ###Markdown Let's visualize the results ###Code plt.plot(true_trajectory, color="C0", label='Simulation') plt.scatter(measurement_times, measurement_data, color="C1", label='Data') plt.xlabel('Time $t$') plt.ylabel('Measurement $Y$') plt.title('Conversion reaction: True parameters fit') plt.legend() plt.show() def distance(simulation, data): return sp.absolute(data["X_2"] - simulation["X_2"]).sum() ###Output _____no_output_____ ###Markdown Define the prior for $\Theta_1$ and $\Theta_2$ ###Code parameter_prior = Distribution(theta1=RV("uniform", 0, 1), theta2=RV("uniform", 0, 1)) parameter_prior.get_parameter_names() abc = ABCSMC(models=model, parameter_priors=parameter_prior, distance_function=distance, population_size=50,
transitions=LocalTransition(k_fraction=.3), eps=MedianEpsilon(500, median_multiplier=0.7)) abc.new(db_path, {"X_2": measurement_data}); h = abc.run(minimum_epsilon=0.1, max_nr_populations=5) ###Output INFO:ABC:t:0 eps:500 INFO:ABC:t:1 eps:1.393207770332699 INFO:ABC:t:2 eps:0.5522857161704047 INFO:ABC:t:3 eps:0.29564846797006095 INFO:ABC:t:4 eps:0.1617727948665631 INFO:History:Done <ABCSMC(id=1, start_time=2018-05-08 15:22:45.838120, end_time=2018-05-08 15:22:50.357498)> ###Markdown Visualization of the probability density functions for $\Theta_1$ and $\Theta_2$ ###Code for t in range(h.max_t+1): ax = plot_kde_2d(*h.get_distribution(m=0, t=t), "theta1", "theta2", xmin=0, xmax=1, numx=300, ymin=0, ymax=1, numy=300) ax.scatter([theta1_true], [theta2_true], color="C1", label='$\Theta$ true = {:.3f}, {:.3f}'.format( theta1_true, theta2_true)) ax.set_title("Posterior t={}".format(t)) ax.legend() ###Output _____no_output_____ ###Markdown Ordinary Differential Equations: Conversion Reaction============================ ###Code This example was kindly contributed by Lukas Sandmeir and Elba Raimundez. It can be downloaded here: :download:`Ordinary Differential Equations <conversion_reaction.ipynb>`. ###Output _____no_output_____ ###Markdown This example provides a model for the interconversion of two species ($X_1$ and $X_2$) following first-order mass action kinetics with the parameters $\Theta_1$ and $\Theta_2$ respectively:$$ X_1 \rightarrow X_2, \quad\text{rate} = \Theta_1 \cdot [X_1]$$$$ X_2 \rightarrow X_1, \quad\text{rate} = \Theta_2 \cdot [X_2]$$Measurement of $[X_2]$ is provided as $Y = [X_2]$.We will show how to estimate $\Theta_1$ and $\Theta_2$ using pyABC. 
###Code %matplotlib inline from pyabc import (ABCSMC, RV, Distribution, MedianEpsilon, LocalTransition) from pyabc.visualization import plot_kde_2d, plot_data_callback import matplotlib.pyplot as plt import os import tempfile import numpy as np import scipy as sp db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "test.db")) ###Output _____no_output_____ ###Markdown Data----We use an artificial data set which consists of a vector of time points $t$ and a measurement vector $Y$. This data was created using the parameter values which are assigned to $\Theta_{\text{true}}$ and by adding normally distributed measurement noise with variance $\sigma^2 = 0.015^2$. ODE model---------$$ \begin{align*} \frac{dX_1}{dt} &= -\Theta_1 \cdot X_1 + \Theta_2 \cdot X_2\\ \frac{dX_2}{dt} &= \Theta_1 \cdot X_1 - \Theta_2 \cdot X_2 \end{align*}$$ Define the true parameters ###Code theta1_true, theta2_true = np.exp([-2.5, -2]) ###Output _____no_output_____ ###Markdown and the measurement data ###Code measurement_data = np.array([0.0244, 0.0842, 0.1208, 0.1724, 0.2315, 0.2634, 0.2831, 0.3084, 0.3079, 0.3097, 0.3324]) ###Output _____no_output_____ ###Markdown as well as the time points at which to evaluate ###Code measurement_times = np.arange(len(measurement_data)) ###Output _____no_output_____ ###Markdown and the initial conditions for $X_1$ and $X_2$ ###Code init = np.array([1, 0]) ###Output _____no_output_____ ###Markdown Define the ODE model ###Code def f(y, t0, theta1, theta2): x1, x2 = y dx1 = - theta1 * x1 + theta2 * x2 dx2 = theta1 * x1 - theta2 * x2 return dx1, dx2 def model(pars): sol = sp.integrate.odeint( f, init, measurement_times, args=(pars["theta1"],pars["theta2"])) return {"X_2": sol[:,1]} ###Output _____no_output_____ ###Markdown Integration of the ODE model for the true parameter values ###Code true_trajectory = model({"theta1": theta1_true, "theta2": theta2_true})["X_2"] ###Output _____no_output_____ ###Markdown Let's visualize the results ###Code
plt.plot(true_trajectory, color="C0", label='Simulation') plt.scatter(measurement_times, measurement_data, color="C1", label='Data') plt.xlabel('Time $t$') plt.ylabel('Measurement $Y$') plt.title('Conversion reaction: True parameters fit') plt.legend() plt.show() def distance(simulation, data): return np.absolute(data["X_2"] - simulation["X_2"]).sum() ###Output _____no_output_____ ###Markdown Define the prior for $\Theta_1$ and $\Theta_2$ ###Code parameter_prior = Distribution(theta1=RV("uniform", 0, 1), theta2=RV("uniform", 0, 1)) parameter_prior.get_parameter_names() abc = ABCSMC(models=model, parameter_priors=parameter_prior, distance_function=distance, population_size=50, transitions=LocalTransition(k_fraction=.3), eps=MedianEpsilon(500, median_multiplier=0.7)) abc.new(db_path, {"X_2": measurement_data}); h = abc.run(minimum_epsilon=0.1, max_nr_populations=5) ###Output INFO:ABC:t: 0, eps: 500. INFO:ABC:Acceptance rate: 50 / 53 = 9.4340e-01, ESS=5.0000e+01. INFO:ABC:t: 1, eps: 1.5042107753358946. INFO:ABC:Acceptance rate: 50 / 137 = 3.6496e-01, ESS=3.4582e+01. INFO:ABC:t: 2, eps: 0.6016110594343894. INFO:ABC:Acceptance rate: 50 / 206 = 2.4272e-01, ESS=3.2066e+01. INFO:ABC:t: 3, eps: 0.36455456951279086. INFO:ABC:Acceptance rate: 50 / 390 = 1.2821e-01, ESS=4.8250e+01. INFO:ABC:t: 4, eps: 0.19779626599870773. INFO:ABC:Acceptance rate: 50 / 251 = 1.9920e-01, ESS=4.2094e+01. 
INFO:History:Done <ABCSMC(id=1, start_time=2020-05-17 19:15:07.639385, end_time=2020-05-17 19:15:09.697238)> ###Markdown Visualization of the probability density functions for $\Theta_1$ and $\Theta_2$ ###Code fig = plt.figure(figsize=(10,8)) for t in range(h.max_t+1): ax = fig.add_subplot(3, np.ceil(h.max_t / 3), t+1) ax = plot_kde_2d( *h.get_distribution(m=0, t=t), "theta1", "theta2", xmin=0, xmax=1, numx=200, ymin=0, ymax=1, numy=200, ax=ax) ax.scatter([theta1_true], [theta2_true], color="C1", label='$\Theta$ true = {:.3f}, {:.3f}'.format( theta1_true, theta2_true)) ax.set_title("Posterior t={}".format(t)) ax.legend() fig.tight_layout() ###Output _____no_output_____ ###Markdown We can also plot the simulated trajectories: ###Code _, ax = plt.subplots() def plot_data(sum_stat, weight, ax, **kwargs): """Plot a single trajectory""" ax.plot(measurement_times, sum_stat['X_2'], color='grey', alpha=0.1) def plot_mean(sum_stats, weights, ax, **kwargs): """Plot mean over all samples""" weights = np.array(weights) weights /= weights.sum() data = np.array([sum_stat['X_2'] for sum_stat in sum_stats]) mean = (data * weights.reshape((-1, 1))).sum(axis=0) ax.plot(measurement_times, mean, color='C2', label='Sample mean') ax = plot_data_callback(h, plot_data, plot_mean, ax=ax) plt.plot(true_trajectory, color="C0", label='Simulation') plt.scatter(measurement_times, measurement_data, color="C1", label='Data') plt.xlabel('Time $t$') plt.ylabel('Measurement $Y$') plt.title('Conversion reaction: Simulated data fit') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Ordinary Differential Equations: Conversion Reaction============================ ###Code This example was kindly contributed by Lukas Sandmeir and Elba Raimundez. It can be downloaded here: :download:`Ordinary Differential Equations <conversion_reaction.ipynb>`. 
###Output _____no_output_____ ###Markdown This example provides a model for the interconversion of two species ($X_1$ and $X_2$) following first-order mass action kinetics with the parameters $\Theta_1$ and $\Theta_2$ respectively:$$ X_1 \rightarrow X_2, \quad\text{rate} = \Theta_1 \cdot [X_1]$$$$ X_2 \rightarrow X_1, \quad\text{rate} = \Theta_2 \cdot [X_2]$$Measurement of $[X_2]$ is provided as $Y = [X_2]$.We will show how to estimate $\Theta_1$ and $\Theta_2$ using pyABC. ###Code %matplotlib inline from pyabc import (ABCSMC, RV, Distribution, MedianEpsilon, LocalTransition) from pyabc.visualization import plot_kde_2d, plot_data_callback import matplotlib.pyplot as plt import os import tempfile import numpy as np import scipy as sp db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "test.db")) ###Output _____no_output_____ ###Markdown Data----We use an artificial data set which consists of a vector of time points $t$ and a measurement vector $Y$. This data was created using the parameter values which are assigned to $\Theta_{\text{true}}$ and by adding normally distributed measurement noise with variance $\sigma^2 = 0.015^2$.
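The artificial data set described above can be regenerated in a few lines; the following sketch uses the model's closed-form trajectory for $X_2$ instead of `odeint`, and the random seed is our own choice (the original noise draws are unknown):

```python
import numpy as np

theta1, theta2 = np.exp([-2.5, -2.0])         # the true parameters from the text
times = np.arange(11)                         # 11 time points, one per measurement
k = theta1 + theta2
x2 = theta1 / k * (1.0 - np.exp(-k * times))  # closed-form trajectory of X_2, X_2(0) = 0

rng = np.random.default_rng(0)                # arbitrary seed; the original draw is unknown
measurements = x2 + rng.normal(0.0, 0.015, size=x2.shape)  # sigma = 0.015 as stated above
```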
ODE model---------$$ \begin{align*} \frac{dX_1}{dt} &= -\Theta_1 \cdot X_1 + \Theta_2 \cdot X_2\\ \frac{dX_2}{dt} &= \Theta_1 \cdot X_1 - \Theta_2 \cdot X_2 \end{align*}$$ Define the true parameters ###Code theta1_true, theta2_true = sp.exp([-2.5, -2]) ###Output _____no_output_____ ###Markdown and the measurement data ###Code measurement_data = sp.array([0.0244, 0.0842, 0.1208, 0.1724, 0.2315, 0.2634, 0.2831, 0.3084, 0.3079, 0.3097, 0.3324]) ###Output _____no_output_____ ###Markdown as well as the time points at which to evaluate ###Code measurement_times = sp.arange(len(measurement_data)) ###Output _____no_output_____ ###Markdown and the initial conditions for $X_1$ and $X_2$ ###Code init = sp.array([1, 0]) ###Output _____no_output_____ ###Markdown Define the ODE model ###Code def f(y, t0, theta1, theta2): x1, x2 = y dx1 = - theta1 * x1 + theta2 * x2 dx2 = theta1 * x1 - theta2 * x2 return dx1, dx2 def model(pars): sol = sp.integrate.odeint( f, init, measurement_times, args=(pars["theta1"],pars["theta2"])) return {"X_2": sol[:,1]} ###Output _____no_output_____ ###Markdown Integration of the ODE model for the true parameter values ###Code true_trajectory = model({"theta1": theta1_true, "theta2": theta2_true})["X_2"] ###Output _____no_output_____ ###Markdown Let's visualize the results ###Code plt.plot(true_trajectory, color="C0", label='Simulation') plt.scatter(measurement_times, measurement_data, color="C1", label='Data') plt.xlabel('Time $t$') plt.ylabel('Measurement $Y$') plt.title('Conversion reaction: True parameters fit') plt.legend() plt.show() def distance(simulation, data): return sp.absolute(data["X_2"] - simulation["X_2"]).sum() ###Output _____no_output_____ ###Markdown Define the prior for $\Theta_1$ and $\Theta_2$ ###Code parameter_prior = Distribution(theta1=RV("uniform", 0, 1), theta2=RV("uniform", 0, 1)) parameter_prior.get_parameter_names() abc = ABCSMC(models=model, parameter_priors=parameter_prior, distance_function=distance, population_size=50,
transitions=LocalTransition(k_fraction=.3), eps=MedianEpsilon(500, median_multiplier=0.7)) abc.new(db_path, {"X_2": measurement_data}); h = abc.run(minimum_epsilon=0.1, max_nr_populations=5) ###Output INFO:ABC:t: 0, eps: 500. INFO:ABC:Acceptance rate: 50 / 53 = 9.4340e-01, ESS=5.0000e+01. INFO:ABC:t: 1, eps: 1.6596361778016244. INFO:ABC:Acceptance rate: 50 / 158 = 3.1646e-01, ESS=3.5010e+01. INFO:ABC:t: 2, eps: 0.6688841550835175. INFO:ABC:Acceptance rate: 50 / 214 = 2.3364e-01, ESS=4.9202e+01. INFO:ABC:t: 3, eps: 0.38862905204343706. INFO:ABC:Acceptance rate: 50 / 765 = 6.5359e-02, ESS=3.9090e+01. INFO:ABC:t: 4, eps: 0.22251566133501083. INFO:ABC:Acceptance rate: 50 / 258 = 1.9380e-01, ESS=4.8035e+01. INFO:History:Done <ABCSMC(id=8, start_time=2019-12-17 08:01:09.127673, end_time=2019-12-17 08:01:11.687160)> ###Markdown Visualization of the probability density functions for $\Theta_1$ and $\Theta_2$ ###Code fig = plt.figure(figsize=(10,8)) for t in range(h.max_t+1): ax = fig.add_subplot(3, np.ceil(h.max_t / 3), t+1) ax = plot_kde_2d( *h.get_distribution(m=0, t=t), "theta1", "theta2", xmin=0, xmax=1, numx=200, ymin=0, ymax=1, numy=200, ax=ax) ax.scatter([theta1_true], [theta2_true], color="C1", label='$\Theta$ true = {:.3f}, {:.3f}'.format( theta1_true, theta2_true)) ax.set_title("Posterior t={}".format(t)) ax.legend() fig.tight_layout() ###Output _____no_output_____ ###Markdown We can also plot the simulated trajectories: ###Code _, ax = plt.subplots() def plot_data(sum_stat, weight, ax, **kwargs): """Plot a single trajectory""" ax.plot(measurement_times, sum_stat['X_2'], color='grey', alpha=0.1) def plot_mean(sum_stats, weights, ax, **kwargs): """Plot mean over all samples""" weights = np.array(weights) weights /= weights.sum() data = np.array([sum_stat['X_2'] for sum_stat in sum_stats]) mean = (data * weights.reshape((-1, 1))).sum(axis=0) ax.plot(measurement_times, mean, color='C2', label='Sample mean') ax = plot_data_callback(h, plot_data, plot_mean, ax=ax) 
plt.plot(true_trajectory, color="C0", label='Simulation') plt.scatter(measurement_times, measurement_data, color="C1", label='Data') plt.xlabel('Time $t$') plt.ylabel('Measurement $Y$') plt.title('Conversion reaction: Simulated data fit') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Ordinary Differential Equations: Conversion Reaction============================ ###Code This example was kindly contributed by Lukas Sandmeir and Elba Raimundez. It can be downloaded here: :download:`Ordinary Differential Equations <conversion_reaction.ipynb>`. ###Output _____no_output_____ ###Markdown **Note:** Before you use pyABC to parametrize your ODE, please be aware of potential errors introduced by inadequately representing the data generation process, see also the "Measurement noise assessment" notebook. For deterministic models, there are often more efficient alternatives to ABC, check out for example our tool pyPESTO. This example provides a model for the interconversion of two species ($X_1$ and $X_2$) following first-order mass action kinetics with the parameters $\Theta_1$ and $\Theta_2$ respectively:$$ X_1 \rightarrow X_2, \quad\text{rate} = \Theta_1 \cdot [X_1]$$$$ X_2 \rightarrow X_1, \quad\text{rate} = \Theta_2 \cdot [X_2]$$Measurement of $[X_2]$ is provided as $Y = [X_2]$.We will show how to estimate $\Theta_1$ and $\Theta_2$ using pyABC. ###Code %matplotlib inline from pyabc import (ABCSMC, RV, Distribution, MedianEpsilon, LocalTransition) from pyabc.visualization import plot_kde_2d, plot_data_callback import matplotlib.pyplot as plt import os import tempfile import numpy as np import scipy as sp db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "test.db")) ###Output _____no_output_____ ###Markdown Data----We use an artificial data set which consists of a vector of time points $t$and a measurement vector $Y$. 
This data was created using the parameter values which are assigned to $\Theta_{\text{true}}$ and by adding normally distributed measurement noise with variance $\sigma^2 = 0.015^2$. ODE model---------$$ \begin{align*} \frac{dX_1}{dt} &= -\Theta_1 \cdot X_1 + \Theta_2 \cdot X_2\\ \frac{dX_2}{dt} &= \Theta_1 \cdot X_1 - \Theta_2 \cdot X_2 \end{align*}$$ Define the true parameters ###Code theta1_true, theta2_true = np.exp([-2.5, -2]) ###Output _____no_output_____ ###Markdown and the measurement data ###Code measurement_data = np.array([0.0244, 0.0842, 0.1208, 0.1724, 0.2315, 0.2634, 0.2831, 0.3084, 0.3079, 0.3097, 0.3324]) ###Output _____no_output_____ ###Markdown as well as the time points at which to evaluate ###Code measurement_times = np.arange(len(measurement_data)) ###Output _____no_output_____ ###Markdown and the initial conditions for $X_1$ and $X_2$ ###Code init = np.array([1, 0]) ###Output _____no_output_____ ###Markdown Define the ODE model ###Code def f(y, t0, theta1, theta2): x1, x2 = y dx1 = - theta1 * x1 + theta2 * x2 dx2 = theta1 * x1 - theta2 * x2 return dx1, dx2 def model(pars): sol = sp.integrate.odeint( f, init, measurement_times, args=(pars["theta1"],pars["theta2"])) return {"X_2": sol[:,1]} ###Output _____no_output_____ ###Markdown Integration of the ODE model for the true parameter values ###Code true_trajectory = model({"theta1": theta1_true, "theta2": theta2_true})["X_2"] ###Output _____no_output_____ ###Markdown Let's visualize the results ###Code plt.plot(true_trajectory, color="C0", label='Simulation') plt.scatter(measurement_times, measurement_data, color="C1", label='Data') plt.xlabel('Time $t$') plt.ylabel('Measurement $Y$') plt.title('Conversion reaction: True parameters fit') plt.legend() plt.show() def distance(simulation, data): return np.absolute(data["X_2"] - simulation["X_2"]).sum() ###Output _____no_output_____ ###Markdown Define the prior for $\Theta_1$ and $\Theta_2$ ###Code parameter_prior = Distribution(theta1=RV("uniform",
0, 1), theta2=RV("uniform", 0, 1)) parameter_prior.get_parameter_names() abc = ABCSMC(models=model, parameter_priors=parameter_prior, distance_function=distance, population_size=50, transitions=LocalTransition(k_fraction=.3), eps=MedianEpsilon(500, median_multiplier=0.7)) abc.new(db_path, {"X_2": measurement_data}); h = abc.run(minimum_epsilon=0.1, max_nr_populations=5) ###Output INFO:ABC:t: 0, eps: 500. INFO:ABC:Acceptance rate: 50 / 53 = 9.4340e-01, ESS=5.0000e+01. INFO:ABC:t: 1, eps: 1.5042107753358946. INFO:ABC:Acceptance rate: 50 / 137 = 3.6496e-01, ESS=3.4582e+01. INFO:ABC:t: 2, eps: 0.6016110594343894. INFO:ABC:Acceptance rate: 50 / 206 = 2.4272e-01, ESS=3.2066e+01. INFO:ABC:t: 3, eps: 0.36455456951279086. INFO:ABC:Acceptance rate: 50 / 390 = 1.2821e-01, ESS=4.8250e+01. INFO:ABC:t: 4, eps: 0.19779626599870773. INFO:ABC:Acceptance rate: 50 / 251 = 1.9920e-01, ESS=4.2094e+01. INFO:History:Done <ABCSMC(id=1, start_time=2020-05-17 19:15:07.639385, end_time=2020-05-17 19:15:09.697238)> ###Markdown Visualization of the probability density functions for $\Theta_1$ and $\Theta_2$ ###Code fig = plt.figure(figsize=(10,8)) for t in range(h.max_t+1): ax = fig.add_subplot(3, np.ceil(h.max_t / 3), t+1) ax = plot_kde_2d( *h.get_distribution(m=0, t=t), "theta1", "theta2", xmin=0, xmax=1, numx=200, ymin=0, ymax=1, numy=200, ax=ax) ax.scatter([theta1_true], [theta2_true], color="C1", label='$\Theta$ true = {:.3f}, {:.3f}'.format( theta1_true, theta2_true)) ax.set_title("Posterior t={}".format(t)) ax.legend() fig.tight_layout() ###Output _____no_output_____ ###Markdown We can also plot the simulated trajectories: ###Code _, ax = plt.subplots() def plot_data(sum_stat, weight, ax, **kwargs): """Plot a single trajectory""" ax.plot(measurement_times, sum_stat['X_2'], color='grey', alpha=0.1) def plot_mean(sum_stats, weights, ax, **kwargs): """Plot mean over all samples""" weights = np.array(weights) weights /= weights.sum() data = np.array([sum_stat['X_2'] for sum_stat in 
sum_stats]) mean = (data * weights.reshape((-1, 1))).sum(axis=0) ax.plot(measurement_times, mean, color='C2', label='Sample mean') ax = plot_data_callback(h, plot_data, plot_mean, ax=ax) plt.plot(true_trajectory, color="C0", label='Simulation') plt.scatter(measurement_times, measurement_data, color="C1", label='Data') plt.xlabel('Time $t$') plt.ylabel('Measurement $Y$') plt.title('Conversion reaction: Simulated data fit') plt.legend() plt.show() ###Output _____no_output_____
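The weighted mean computed in `plot_mean` above is an importance-weighted average with the particle weights normalized to sum to one; a standalone sketch with toy numbers of our own choosing:

```python
import numpy as np

# Three accepted particles, each a 4-point trajectory (toy values, not real ABC output).
trajectories = np.array([[0.0, 0.1, 0.2, 0.3],
                         [0.0, 0.2, 0.3, 0.4],
                         [0.0, 0.3, 0.4, 0.5]])
weights = np.array([1.0, 2.0, 1.0])

weights = weights / weights.sum()                     # normalize: [0.25, 0.5, 0.25]
mean = (trajectories * weights[:, None]).sum(axis=0)  # weighted average per time point
```

With these toy weights the middle trajectory counts twice as much as the other two.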
Python/tigre/demos/d01_create_geometry.ipynb
###Markdown Demo 01: Describing your geometry ###Code In TIGRE the geometry is stored in a class. -------------------------------------------------------------------------- -------------------------------------------------------------------------- This file is part of the TIGRE Toolbox Copyright (c) 2015, University of Bath and CERN-European Organization for Nuclear Research All rights reserved. License: Open Source under BSD. See the full license at https://github.com/CERN/TIGRE/license.txt Contact: [email protected] Codes: https://github.com/CERN/TIGRE/ -------------------------------------------------------------------------- Coded by: MATLAB (original code): Ander Biguri PYTHON : Reuben Lindroos,Sam Loescher To see a demo of what the geometry paramterers should look like, do as follows: ###Output _____no_output_____ ###Markdown import tigregeo = tigre.geometry_default(high_quality = False)print(geo) ###Code Geometry definition Detector plane, behind |-----------------------------| | | | | | | Centered | | at O A V +--------+ | | / /| | A Z | / / |*D | | | +--------+ | | | | | | | | | | | *O | + | *--->y | | | / | / | | |/ | V X | +--------+ U | .--------------------->-------| *S ###Output _____no_output_____ ###Markdown We recommend using the template below and defining you're class as such: ###Code from __future__ import division import numpy as np class TIGREParameters: def __init__(self, high_quality=True): if high_quality: # VARIABLE DESCRIPTION UNITS # ------------------------------------------------------------------------------------- self.DSD = 1536 # Distance Source Detector (mm) self.DSO = 1000 # Distance Source Origin (mm) # Detector parameters self.nDetector = np.array((512, 512)) # number of pixels (px) self.dDetector = np.array((0.8, 0.8)) # size of each pixel (mm) self.sDetector = self.nDetector * self.dDetector # total size of the detector (mm) # Image parameters self.nVoxel = np.array((256, 256, 256)) # number of voxels (vx) self.sVoxel = 
np.array((256, 256, 256)) # total size of the image (mm) self.dVoxel = self.sVoxel/self.nVoxel # size of each voxel (mm) # Offsets self.offOrigin = np.array((0, 0, 0)) # Offset of image from origin (mm) self.offDetector = np.array((0, 0)) # Offset of Detector (mm) # Auxiliary self.accuracy = 0.5 # Accuracy of FWD proj (vx/sample) # Mode self.mode = 'cone' # parallel, cone ... else: # VARIABLE DESCRIPTION UNITS # ------------------------------------------------------------------------------------- self.DSD = 1536 # Distance Source Detector (mm) self.DSO = 1000 # Distance Source Origin (mm) # Detector parameters self.nDetector = np.array((128, 128)) # number of pixels (px) self.dDetector = np.array((0.8, 0.8))*4 # size of each pixel (mm) self.sDetector = self.nDetector * self.dDetector # total size of the detector (mm) # Image parameters self.nVoxel = np.array((64, 64 , 64)) # number of voxels (vx) self.sVoxel = np.array((256, 256, 256)) # total size of the image (mm) self.dVoxel = self.sVoxel / self.nVoxel # size of each voxel (mm) # Offsets self.offOrigin = np.array((0, 0, 0)) # Offset of image from origin (mm) self.offDetector = np.array((0, 0)) # Offset of Detector (mm) # Auxiliary self.accuracy = 0.5 # Accuracy of FWD proj (vx/sample) # Mode self.mode=None # parallel, cone ... 
self.filter=None ###Output _____no_output_____ ###Markdown Demo 01: Describing your geometry To see a demo of what the geometry parameters should look like, do as follows: ###Code import tigre geo = tigre.geometry_default(high_quality = False) print(geo) ###Output TIGRE parameters ----- Geometry parameters Distance from source to detector (DSD) = 1536 mm Distance from source to origin (DSO)= 1000 mm ----- Detector parameters Number of pixels (nDetector) = [128 128] Size of each pixel (dDetector) = [3.2 3.2] mm Total size of the detector (sDetector) = [409.6 409.6] mm ----- Image parameters Number of voxels (nVoxel) = [64 64 64] Total size of the image (sVoxel) = [256 256 256] mm Size of each voxel (dVoxel) = [4. 4. 4.] mm ----- Offset correction parameters Offset of image from origin (offOrigin) = [0 0 0] mm Offset of detector (offDetector) = [0 0] mm ----- Auxillary parameters Samples per pixel of forward projection (accuracy) = 0.5 ----- Rotation of the Detector (rotDetector) = [0 0 0] rad ###Markdown We recommend using the template below and defining your class as such: ###Code from __future__ import division import numpy as np class Geometry: def __init__(self, high_quality=True): if high_quality: # VARIABLE DESCRIPTION UNITS # ------------------------------------------------------------------------------------- self.DSD = 1536 # Distance Source Detector (mm) self.DSO = 1000 # Distance Source Origin (mm) # Detector parameters self.nDetector = np.array((512, 512)) # number of pixels (px) self.dDetector = np.array((0.8, 0.8)) # size of each pixel (mm) self.sDetector = self.nDetector * self.dDetector # total size of the detector (mm) # Image parameters self.nVoxel = np.array((256, 256, 256)) # number of voxels (vx) self.sVoxel = np.array((256, 256, 256)) # total size of the image (mm) self.dVoxel = self.sVoxel/self.nVoxel # size of each voxel (mm) # Offsets self.offOrigin = np.array((0, 0, 0)) # Offset of image from origin (mm) self.offDetector = np.array((0,
0)) # Offset of Detector (mm) # Auxiliary self.accuracy = 0.5 # Accuracy of FWD proj (vx/sample) # Mode self.mode = 'cone' # parallel, cone ... else: # VARIABLE DESCRIPTION UNITS # ------------------------------------------------------------------------------------- self.DSD = 1536 # Distance Source Detector (mm) self.DSO = 1000 # Distance Source Origin (mm) # Detector parameters self.nDetector = np.array((128, 128)) # number of pixels (px) self.dDetector = np.array((0.8, 0.8))*4 # size of each pixel (mm) self.sDetector = self.nDetector * self.dDetector # total size of the detector (mm) # Image parameters self.nVoxel = np.array((64, 64 , 64)) # number of voxels (vx) self.sVoxel = np.array((256, 256, 256)) # total size of the image (mm) self.dVoxel = self.sVoxel / self.nVoxel # size of each voxel (mm) # Offsets self.offOrigin = np.array((0, 0, 0)) # Offset of image from origin (mm) self.offDetector = np.array((0, 0)) # Offset of Detector (mm) # Auxiliary self.accuracy = 0.5 # Accuracy of FWD proj (vx/sample) # Mode self.mode=None # parallel, cone ... self.filter=None ###Output _____no_output_____ ###Markdown Demo 01: Describing your geometry ###Code In TIGRE the geometry is stored in a class. -------------------------------------------------------------------------- -------------------------------------------------------------------------- This file is part of the TIGRE Toolbox Copyright (c) 2015, University of Bath and CERN-European Organization for Nuclear Research All rights reserved. License: Open Source under BSD. 
See the full license at https://github.com/CERN/TIGRE/license.txt Contact: [email protected] Codes: https://github.com/CERN/TIGRE/ -------------------------------------------------------------------------- Coded by: MATLAB (original code): Ander Biguri PYTHON : Reuben Lindroos,Sam Loescher To see a demo of what the geometry paramterers should look like, do as follows: ###Output _____no_output_____ ###Markdown import tigregeo = tigre.geometry_default(high_quality = False)print(geo) ###Code Geometry definition Detector plane, behind |-----------------------------| | | | | | | Centered | | at O A V +--------+ | | / /| | A Z | / / |*D | | | +--------+ | | | | | | | | | | | *O | + | *--->y | | | / | / | | |/ | V X | +--------+ U | .--------------------->-------| *S ###Output _____no_output_____ ###Markdown We recommend using the template below and defining you're class as such: ###Code from __future__ import division import numpy as np class TIGREParameters: def __init__(self, high_quality=True): if high_quality: # VARIABLE DESCRIPTION UNITS # ------------------------------------------------------------------------------------- self.DSD = 1536 # Distance Source Detector (mm) self.DSO = 1000 # Distance Source Origin (mm) # Detector parameters self.nDetector = np.array((512, 512)) # number of pixels (px) self.dDetector = np.array((0.8, 0.8)) # size of each pixel (mm) self.sDetector = self.nDetector * self.dDetector # total size of the detector (mm) # Image parameters self.nVoxel = np.array((256, 256, 256)) # number of voxels (vx) self.sVoxel = np.array((256, 256, 256)) # total size of the image (mm) self.dVoxel = self.sVoxel/self.nVoxel # size of each voxel (mm) # Offsets self.offOrigin = np.array((0, 0, 0)) # Offset of image from origin (mm) self.offDetector = np.array((0, 0)) # Offset of Detector (mm) # Auxiliary self.accuracy = 0.5 # Accuracy of FWD proj (vx/sample) # Mode self.mode = 'cone' # parallel, cone ... 
else: # VARIABLE DESCRIPTION UNITS # ------------------------------------------------------------------------------------- self.DSD = 1536 # Distance Source Detector (mm) self.DSO = 1000 # Distance Source Origin (mm) # Detector parameters self.nDetector = np.array((128, 128)) # number of pixels (px) self.dDetector = np.array((0.8, 0.8))*4 # size of each pixel (mm) self.sDetector = self.nDetector * self.dDetector # total size of the detector (mm) # Image parameters self.nVoxel = np.array((64, 64 , 64)) # number of voxels (vx) self.sVoxel = np.array((256, 256, 256)) # total size of the image (mm) self.dVoxel = self.sVoxel / self.nVoxel # size of each voxel (mm) # Offsets self.offOrigin = np.array((0, 0, 0)) # Offset of image from origin (mm) self.offDetector = np.array((0, 0)) # Offset of Detector (mm) # Auxiliary self.accuracy = 0.5 # Accuracy of FWD proj (vx/sample) # Mode self.mode=None # parallel, cone ... self.filter=None ###Output _____no_output_____
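A few derived quantities follow directly from the geometry fields above, e.g. the cone-beam magnification DSD/DSO and the detector pixel size projected back to the isocenter; a minimal sketch using the low-quality numbers (the derived variable names are ours, not part of TIGRE):

```python
import numpy as np

DSD, DSO = 1536.0, 1000.0                 # mm, from the demo geometry
nDetector = np.array([128, 128])          # px, low-quality settings
dDetector = np.array([0.8, 0.8]) * 4      # mm per pixel -> [3.2, 3.2]

sDetector = nDetector * dDetector         # total detector size: [409.6, 409.6] mm
magnification = DSD / DSO                 # 1.536 for this setup
pixel_at_iso = dDetector / magnification  # effective sampling at the object, ~2.08 mm
```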
ICP1/Part 4 - Fashion-MNIST (Exercises).ipynb
###Markdown Classifying Fashion-MNISTNow it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.First off, let's load the dataset through torchvision. ###Code import torch from torchvision import datasets, transforms import helper from torch import nn from torch import optim import torch.nn.functional as F # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) ###Output _____no_output_____ ###Markdown Here we can see one of the images. ###Code image, label = next(iter(trainloader)) helper.imshow(image[0,:]); ###Output _____no_output_____ ###Markdown Building the networkHere you should define your network. 
As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers. ###Code # TODO: Define your network architecture here model = nn.Sequential(nn.Linear(784,128), nn.ReLU(), nn.Linear(128,64), nn.ReLU(), nn.Linear(64,10)) ###Output _____no_output_____ ###Markdown Train the networkNow you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.htmlloss-functions) ( something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).Then write the training code. Remember the training pass is a fairly straightforward process:* Make a forward pass through the network to get the logits * Use the logits to calculate the loss* Perform a backward pass through the network with `loss.backward()` to calculate the gradients* Take a step with the optimizer to update the weightsBy adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4. 
###Code # TODO: Create the network, define the criterion and optimizer criterion = nn.NLLLoss() optimizer = optim.SGD(model.parameters(), lr=0.004) # TODO: Train the network here epochs = 5 for e in range(epochs): running_loss = 0 for images, labels in trainloader: # Flatten MNIST images into a 784 long vector images = images.view(images.shape[0], -1) optimizer.zero_grad() logits = model(images) loss = criterion(logits, labels) loss.backward() optimizer.step() running_loss += loss.item() else: print(f"Training loss: {running_loss/len(trainloader)}") import torch from torchvision import datasets, transforms import helper from torch import nn from torch import optim import torch.nn.functional as F # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) # TODO: Define your network architecture here model = nn.Sequential(nn.Linear(784,128), nn.ReLU(), nn.Linear(128,64), nn.ReLU(), nn.Linear(64,10), nn.LogSoftmax(dim=1)) # TODO: Create the network, define the criterion and optimizer criterion = nn.NLLLoss() optimizer = optim.SGD(model.parameters(), lr=0.004) # TODO: Train the network here epochs = 5 for e in range(epochs): running_loss = 0 for images, labels in trainloader: # Flatten MNIST images into a 784 long vector images = images.view(images.shape[0], -1) optimizer.zero_grad() logits = model(images) loss = criterion(logits, labels) loss.backward() optimizer.step() running_loss += loss.item() else: print(f"Training loss: 
{running_loss/len(trainloader)}") %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper def softmax(x): """Calculates the softmax""" numerator = torch.exp(x) denominator = numerator.sum(dim=1).view(-1, 1) return numerator/denominator # Test out your network! dataiter = iter(testloader) images, labels = dataiter.next() img = images[0] # Convert 2D image to 1D vector img = img.resize_(1, 784) # TODO: Calculate the class probabilities (softmax) for img ps = torch.exp(model(img)) # Plot the image and probabilities helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion') ###Output _____no_output_____
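The `softmax` helper above exponentiates the raw scores directly, which overflows for large logits; subtracting the per-row maximum first leaves the result unchanged and avoids that. A NumPy sketch of the trick:

```python
import numpy as np

def stable_softmax(z):
    """Row-wise softmax; shifting each row by its max avoids overflow in exp."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[1000.0, 1001.0], [0.0, 0.0]])  # naive exp(1000.0) would overflow
p = stable_softmax(logits)                          # rows still sum to 1, no warnings
```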
SoftMaxRegression.ipynb
###Markdown Softmax Regression ###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report

class SoftmaxRegression:
    
    def softmax(self, x):
        # shape(#samples, #classes)
        z = (np.dot(x, self.weight)) + self.bias
        # subtract the row-wise max for numerical stability before exponentiating
        exp_z = np.exp(z - z.max(axis=1, keepdims=True))
        # shape(#samples, #classes)
        return exp_z / exp_z.sum(axis=1, keepdims=True)
    
    def forward(self, x):
        # shape(#samples, #classes)
        return self.softmax(x)
    
    def crossEntropy(self, y, y_hat):
        # shape(#samples, )
        return - np.sum(np.log(y_hat) * (y), axis=1)
    
    def cost(self, y, y_hat):
        # scalar
        return np.mean(self.crossEntropy(y, y_hat))
    
    def train(self, x, y, alpha, epoch, random_state=-1):
        # x : shape(#samples, #features)
        # y : shape(#samples, #classes)
        
        m, n, c = x.shape[0], x.shape[1], y.shape[1]
        
        if random_state != -1:
            np.random.seed(random_state)
        
        # shape(#features, #classes)
        self.weight = np.random.randn(n,c)
        # shape(1, #classes)
        self.bias = np.zeros((1,c))
        self.epoch = epoch
        self.cost_list = []
        
        for i in range(self.epoch):
            
            # shape(#samples, #classes)
            y_hat = self.forward(x)
            
            # scalar
            loss = self.cost(y, y_hat)
            self.cost_list.append(loss)
            
            # Gradient
            # dL_dw : dLoss/dweight (#features, #classes)
            dL_dw = (np.dot(x.T, (y_hat - y)))/m
            # dL_db : dLoss/dbias (1, #classes); sum over samples only,
            # so that each class keeps its own bias gradient
            dL_db = np.sum(y_hat - y, axis=0, keepdims=True)/m
            
            # shape(#features, #classes)
            self.weight = self.weight - (alpha * dL_dw)
            # shape(1, #classes)
            self.bias = self.bias - (alpha * dL_db)
    
    def plot_convergence(self):
        plt.plot([i for i in range(self.epoch)], self.cost_list)
        plt.xlabel('Epochs'); plt.ylabel('Cross Entropy')
    
    def predict(self, x_test):
        # shape(#samples, #classes)
        y_hat = self.forward(x_test)
        return y_hat.argmax(axis=1)
 ###Output _____no_output_____ ###Markdown Utils ###Code
def train_test_split(x, y, size=0.2, random_state=-1):
    # note: this split does not shuffle; the data is shuffled beforehand,
    # so random_state is kept only for interface symmetry
    if random_state != -1:
        np.random.seed(random_state)
    
    x_val = x[:int(len(x)*size)]
    y_val = y[:int(len(x)*size)]
    
    x_train = x[int(len(x)*size):]
    y_train = y[int(len(x)*size):]
    
    return x_train, y_train, x_val, y_val
 ###Output _____no_output_____ ###Markdown Train ###Code
df = pd.read_csv('data/Iris.csv')
df.head(2)
 ###Output _____no_output_____
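As a quick sanity check on the two ingredients defined above — `softmax` and `crossEntropy` — a tiny pure-Python version (illustrative values, no NumPy) confirms that softmax scores form a probability distribution and that the cross entropy of a near-perfect prediction is close to zero:

```python
import math

def softmax_row(scores):
    # stable softmax for a single row of class scores
    m = max(scores)
    exps = [math.exp(v - m) for v in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy_row(y, y_hat):
    # y is a one-hot row, y_hat a probability row
    return -sum(yi * math.log(phi) for yi, phi in zip(y, y_hat) if yi > 0)

row = softmax_row([2.0, 1.0, 0.1])
print(sum(row))                                            # 1.0 up to rounding
print(cross_entropy_row([1, 0, 0], [0.99, 0.005, 0.005]))  # small: prediction is nearly right
```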
###Markdown Data Preparation ###Code df.Species.unique() ###Output _____no_output_____ ###Markdown Convert to numerical ###Code df.Species.replace(('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'), (0, 1, 2), inplace=True) ###Output _____no_output_____ ###Markdown Shuffle data ###Code df = df.sample(frac=1, random_state=0) ###Output _____no_output_____ ###Markdown Convert dataframe to numpy array ###Code X, Y = df.drop(['Species'], axis=1).values, df.Species.values ###Output _____no_output_____ ###Markdown Split ###Code X_train, Y_train, X_val, Y_val = train_test_split(X, Y, size=0.2, random_state=0) y_train = Y_train.copy() Y_train = (np.arange(np.max(Y_train) + 1) == Y_train[:, None]).astype(float) ###Output _____no_output_____ ###Markdown Train ###Code s = SoftmaxRegression() s.train(X_train, Y_train, 0.02, 150, random_state=0) s.plot_convergence() ###Output _____no_output_____ ###Markdown Evaluate on validation data ###Code Y_hat = s.predict(X_val) confusion_matrix(Y_val, Y_hat) print(classification_report(Y_val, Y_hat)) ###Output precision recall f1-score support 0 1.00 1.00 1.00 11 1 1.00 0.77 0.87 13 2 0.67 1.00 0.80 6 accuracy 0.90 30 macro avg 0.89 0.92 0.89 30 weighted avg 0.93 0.90 0.90 30 ###Markdown Train using sklearn Train ###Code lr = LogisticRegression(random_state=0) lr.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Evaluate on validation data ###Code Y_hat = lr.predict(X_val) confusion_matrix(Y_val, Y_hat) print(classification_report(Y_val, Y_hat)) ###Output precision recall f1-score support 0 1.00 1.00 1.00 11 1 1.00 1.00 1.00 13 2 1.00 1.00 1.00 6 accuracy 1.00 30 macro avg 1.00 1.00 1.00 30 weighted avg 1.00 1.00 1.00 30
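Returning to the from-scratch implementation: the hand-derived gradient used in `train()` — `dL_dw = (np.dot(x.T, (y_hat - y)))/m` — can be verified against finite differences. A small pure-Python sketch on a single sample (illustrative numbers; the weight layout is one row per class for simplicity):

```python
import math

def softmax(scores):
    # stable softmax over a list of class scores
    m = max(scores)
    exps = [math.exp(v - m) for v in scores]
    s = sum(exps)
    return [e / s for e in exps]

def ce_loss(w, x, y):
    # w: one weight row per class; single sample x; one-hot label y
    z = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    p = softmax(z)
    return -sum(yi * math.log(pi) for yi, pi in zip(y, p) if yi > 0)

def analytic_grad(w, x, y):
    z = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    p = softmax(z)
    # dL/dw[c][j] = (p_c - y_c) * x_j, i.e. (y_hat - y) outer x
    return [[(pc - yc) * xj for xj in x] for pc, yc in zip(p, y)]

w = [[0.1, -0.2], [0.0, 0.3], [-0.1, 0.2]]
x, y = [1.0, 2.0], [0, 1, 0]
g = analytic_grad(w, x, y)

eps = 1e-6
max_diff = 0.0
for c in range(len(w)):
    for j in range(len(x)):
        w[c][j] += eps
        up = ce_loss(w, x, y)
        w[c][j] -= 2 * eps
        down = ce_loss(w, x, y)
        w[c][j] += eps          # restore the weight
        max_diff = max(max_diff, abs((up - down) / (2 * eps) - g[c][j]))
print(max_diff)  # tiny: analytic and numerical gradients agree
```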
Training/Tutorial - Gluon MXNet - The Straight Dope Master/proto-P02-C02.6-loss.ipynb
###Markdown Loss FunctionsWhen fitting data to labels we need to measure the degree of goodness of fit. This sounds obvious but isn't quite so straightforward. In fact, there are entire fields of statistics that focus solely on that (e.g. robust statistics). In this notebook we'll discuss a number of ways to measure whether our model is doing well. As a side benefit, we'll get to know the loss function layers in ``gluon``. We begin with our default import ritual. ###Code
import mxnet as mx
import mxnet.gluon as gluon
from mxnet import nd, autograd
import matplotlib.pyplot as plt
import numpy as np
import mxnet.autograd as ag
import math

mx.random.seed(1)
 ###Output _____no_output_____ ###Markdown Regression L1 lossAs we discussed in the introduction, regression describes the cases where we want to estimate some real valued number $f(x) \in \mathbb{R}$ to match an observation $y$. A natural idea of measuring the distance would be to compute $|y - f(x)|$. This makes sense, e.g. if we need to estimate how much it might cost to manufacture a product: if we estimate too low, we will incur a loss due to underestimation. If we overprice it, we will sell fewer products (here we're making the unrealistic assumption that both are equally bad). In math, the loss function is$$l(y,f) = |y-f|$$Let's compute it with ``gluon`` and also its gradient. ###Code
loss = gluon.loss.L1Loss()

# getting data ready
output = nd.arange(-5,5,0.01)
output.attach_grad() # we need the gradient
thelabel = nd.zeros_like(output)

with ag.record():    # start recording
    theloss = loss(output, thelabel)
theloss.backward()   # and compute the gradient

plt.plot(output.asnumpy(), theloss.asnumpy(), label='L1 loss')
plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of L1 loss')
plt.legend()
plt.show()
 ###Output _____no_output_____ ###Markdown Before we move on to other losses, let's quickly consider what happens if we want to minimize the L1 loss.
Consider the toy example where we have a number of labels $y_i$ and we want to fit *all* of them to a single scalar, say $f$. In this case we need to solve the minimization problem:$$\mathop{\mathrm{minimize}}_f \sum_i |y_i - f|$$As we saw above, the gradient is either -1 or 1. Hence, for the gradients to the left and to the right of $f$ to cancel out we need *the same number of $y_i$* on either side. This is the definition of the *median*. Hence, minimizing the L1 loss means that we are computing the median (at least for constant predictions). In general, the L1 loss is very robust against outliers, since the gradients can never get too large. L2 lossTaking the squared distance between observation and estimate tends to be the default choice in many problems. Often for convenience we multiply this loss by a factor of $\frac{1}{2}$ to ensure that the derivatives look pretty. Here's the loss:$$l(y,f) = \frac{1}{2} (y-f)^2$$For vectorial $f$ and $y$ this is the squared Euclidean distance between points. The L2 loss has a few other nice properties. By a similar argument as before we can see that $\sum_{i=1}^m \frac{1}{2} (y_i - f)^2$ is minimized by choosing $f = \frac{1}{m} \sum_{i=1}^m y_i$, i.e. by choosing the mean. Let's see what it looks like in practice. ###Code loss = gluon.loss.L2Loss() with ag.record(): # start recording theloss = loss(output, thelabel) theloss.backward() # and compute the gradient plt.plot(output.asnumpy(), theloss.asnumpy(), label='L2 loss') plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of L2 loss') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Huber's Robust lossHuber's Robust Loss is a cross between the L1 and the L2 loss. It behaves like an L2 loss close to zero. Beyond that, for discrepancies larger than $\rho$ it behaves like an L1 loss. The scaling is set up in such a way as to ensure that the derivative is continuous. 
$$l(y,f) = \begin{cases}\frac{1}{2 \rho} (y-f)^2 & \text{ for } |y-f| < \rho \\|y-f| - \frac{\rho}{2} & \text{ otherwise}\end{cases}$$If we minimize the loss something interesting happens (again, we're in the toy scenario that we just estimate a scalar). The number of cases with $y_i < f - \rho$ and the number with $y_i > f + \rho$ are going to cancel out, since their gradients are $1$ and $-1$ respectively. For all the $y_i$ closer to $f$, the gradients will balance out like in the L2 loss case. In other words, $f$ will be the mean for all points closer than $\rho$. This is pretty much what a *trimmed mean* estimator does. It ensures that a few outliers (very large $|y_i|$) won't break the estimate. Let's check it out in practice. ###Code
loss = gluon.loss.Huber(rho=0.5)

with ag.record():    # start recording
    theloss = loss(output, thelabel)
theloss.backward()   # and compute the gradient

plt.plot(output.asnumpy(), theloss.asnumpy(), label='Huber loss 0.5')
plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of Huber loss 0.5')

# and now for the same loss function with rho=1.0, the default
loss = gluon.loss.Huber()

with ag.record():    # start recording
    theloss = loss(output, thelabel)
theloss.backward()   # and compute the gradient

plt.plot(output.asnumpy(), theloss.asnumpy(), label='Huber loss 1')
plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of Huber loss 1')
plt.legend()
plt.show()
 ###Output _____no_output_____ ###Markdown Quantile RegressionIn most cases we want to find an output $y$ which is in some way maximal for a given $x$, e.g. the one with the smallest amount of variance, the most likely one, etc. But there are cases where this isn't quite the most desirable thing: imagine that we want to build a tool for physicians to assess whether a child is of normal height. *normal* is obviously relative - relative to age, gender, the ethnic background of the parents, etc. While a good physician might have a good intuition, it would be great if we could quantify this.
That is exactly what *quantile regression* does. It aims to estimate some output $f(x)$ such that $\Pr(y \leq f(x)|x) = \tau$ for some quantile $\tau$. This allows us to trace quantile curves for all sorts of probabilities, such as the table below computed by the CDC.![](img/growth-2-20-girls.png)To calculate such a table we can use a skewed loss function. Statisticians often call it a 'pinball loss', since it looks like the levers on a pinball machine. Basically it's an L1 loss that has been tilted to one side or another. $$l(y,f) = \begin{cases}\tau (y-f) & \text{ for } f<y \\(1-\tau) (f-y) & \text{ otherwise}\end{cases}$$Depending on how far we tilt this loss, we end up with a loss function that underweights (small $\tau$) or overweights (large $\tau$) errors on the left or on the right. ###Code loss = gluon.loss.Quantile(tau=0.2) with ag.record(): # start recording theloss = loss(output, thelabel) theloss.backward() # and compute the gradient plt.plot(output.asnumpy(), theloss.asnumpy(), label='Quantile loss 0.2') plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of Quantile loss 0.2') # and now for the same loss function with tau = 0.6 loss = gluon.loss.Quantile(tau=0.6) with ag.record(): # start recording theloss = loss(output, thelabel) theloss.backward() # and compute the gradient plt.plot(output.asnumpy(), theloss.asnumpy(), label='Quantile loss 0.6') plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of Quantile loss 0.6') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown $\epsilon$ Insensitive LossIn some cases we do not care about small deviations from the truth. More to the point, we do not care about deviations up to $\epsilon$. Beyond that, we might care in a linear fashion. For instance, a screw might have a tolerance of $\epsilon$ and the work to make anything fit beyond that would be linear in the diameter of the screw (yes, it's a contrived example). 
The associated loss function (described in detail by a paper by Vapnik, Golowich and Smola, 1995) is given by:$$l(y,f) = \mathrm{max}(0, |y-f| - \epsilon)$$As you can see, it contains a region $[y-\epsilon, y+\epsilon]$ where the derivative vanishes. Outside that range it is constant. ###Code
loss = gluon.loss.EpsilonInsensitive(epsilon=0.5)

with ag.record():    # start recording
    theloss = loss(output, thelabel)
theloss.backward()   # and compute the gradient

plt.plot(output.asnumpy(), theloss.asnumpy(), label='Epsilon-insensitive loss 0.5')
plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of Epsilon-insensitive loss 0.5')
plt.legend()
plt.show()
 ###Output _____no_output_____ ###Markdown LogCosh LossAn obscure variant among loss functions is the LogCosh loss. The key idea is to smooth out the L1 loss such that the loss becomes continuously differentiable even at $0$. This is accomplished by computing the softmax between $y-f$ and $f-y$, i.e. to compute $\log \cosh (y-f)$. The results are exactly as expected. Note that to compute it, we use a numerically stable variant $\log \cosh x = |x| + \log \left( \left(1 + \exp(-2|x|)\right)/2 \right)$. This ensures that large values of $x$ do not lead to divergent terms. ###Code
loss = gluon.loss.LogCosh()

with ag.record():    # start recording
    theloss = loss(output, thelabel)
theloss.backward()   # and compute the gradient

plt.plot(output.asnumpy(), theloss.asnumpy(), label='LogCosh loss')
plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of LogCosh loss')
plt.legend()
plt.show()
 ###Output _____no_output_____ ###Markdown Poisson In some cases the regression problem does not have to deal with continuous values that could be both positive or negative, but rather with *integer counts*. For instance, the number of rain drops per square meter in a given time, the number of meteorites hitting Antarctica per day, the number of Prussian soldiers that were hit by horses per week, etc. can be useful numbers to estimate.
However, it is equally clear that a real valued estimate is useless: we never have 1.3 meteorites. It's only $0, 1, 2, 3, \ldots$ or some other number. Consequently, we need a different loss function. Fortunately the Poisson distribution fits the bill quite well. In it, we assume that $$p(y|f) = \frac{1}{y!} \exp(y f - \exp(f)) \text{ and } l(y,f) = - \log p(y|f).$$In many cases one uses an equivalent formulation with rate parameter $\lambda = \exp(f)$ such that we get$p(y|\lambda) = \frac{1}{y!} \lambda^y e^{-\lambda}$. Note that this is entirely equivalent. The only problem with the $\lambda$-parametrization is that $\lambda$ must be nonnegative, whereas $f$ can assume arbitrary values. **Unlike Keras and PyTorch, Gluon uses the exponential formulation**. By design, the loss function vanishes for $y = \exp(f)$, as can be seen in the graph below (this is one of the reasons why sometimes the $\lambda$ parametrization is preferable). ###Code
loss = gluon.loss.Poisson()

with ag.record():    # start recording
    theloss = loss(output, 10 * nd.ones_like(output))
theloss.backward()   # and compute the gradient

plt.plot(output.asnumpy(), theloss.asnumpy(), label='Poisson loss')
plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of Poisson loss')
plt.legend()
plt.show()

# this implements an L2 norm triplet loss
# max(margin + |f1 - f2|^2 - |f1-f3|^2, 0) per observation
def TripletLoss(f1, f2, f3):
    margin = 1
    loss = nd.sum((f1-f2)**2 - (f1-f3)**2, axis=1) + margin
    loss = nd.maximum(loss, nd.zeros_like(loss))
    return loss

loss = TripletLoss

#with ag.record():    # start recording
#    theloss = loss(output, nd.ones_like(output))
#theloss.backward()   # and compute the gradient

#plt.plot(output.asnumpy(), theloss.asnumpy(), label='Huber Loss')
#plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient')
#plt.legend()
#plt.show()

f1 = nd.random_normal(shape=(5,10))
f2 = nd.random_normal(shape=(5,10))
f3 = nd.random_normal(shape=(5,10))

theloss = loss(f1, f2, f3)
print(theloss) ###Output [ 7.53516912 0. 0. 0. 6.3003993 ] <NDArray 5 @cpu(0)> ###Markdown Classification Logistic RegressionNext consider the case where we have two labels, say ``cat`` and ``dog``. Since statisticians (and computers) don't like strings, we simplify this to $y \in \{\pm 1\}$. One way of mapping real numbers in $\mathbb{R}$ into class probabilities is to use a sigmoid function.$$p(y|f) = \frac{1}{1 + \exp(-y f)} \text{ and hence } -\log p(y|f) = \log(1 + \exp(-y f))$$*Side remark for math nerds:* To keep the term numerically stable we can rewrite it as $-yf + \log(1 + \exp(yf))$ whenever $yf < 0$. The reason for doing this is to avoid exponentiating a large positive number which would trigger a numerical overflow. Combining both expressions we get the following expression: $\log(1 + \exp(-|yf|)) - \delta(yf < 0) \cdot yf$. As we can see, the probabilities converge to 0 and 1 respectively for extreme values of $f$. ###Code loss = gluon.loss.Logistic() # getting data ready thelabel = nd.ones_like(output) with ag.record(): # start recording theloss = loss(output, thelabel) theloss.backward() # and compute the gradient plt.plot(output.asnumpy(), theloss.asnumpy(), label='Logistic loss') plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of logistic loss') # now compute the loss for y=-1 with ag.record(): # start recording theloss = loss(output, -thelabel) theloss.backward() # and compute the gradient plt.plot(output.asnumpy(), theloss.asnumpy(), label='Logistic loss for y=-1') plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of loss for y=-1') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Soft Margin LossNote that the logistic loss isn't the only loss that one might encounter. For instance, in Support Vector Machines we have a soft-margin loss. It is $0$ whenever data is correctly classified with some confidence, say $y f(x) > 1$. Otherwise we impose a linear penalty. 
In math this amounts to$$l(y,f) = \mathrm{max}(0, 1- yf)$$In some cases we want to square this loss function. Quite unsurprisingly, the counterpart to `SoftMargin` is called `SquaredSoftMargin`. ###Code
loss = gluon.loss.SoftMargin()

with ag.record():    # start recording
    theloss = loss(output, thelabel)
theloss.backward()   # and compute the gradient

plt.plot(output.asnumpy(), theloss.asnumpy(), label='Soft margin')
plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient')

# now compute the loss for y=-1
theloss = loss(output, -thelabel)
plt.plot(output.asnumpy(), theloss.asnumpy(), label='Soft margin for y=-1')
plt.legend()
plt.show()
 ###Output _____no_output_____ ###Markdown Exponential LossIn some cases, when we *really* want to ensure that things are classified correctly, we might replace $\log(1 + \exp(-yf))$ with its exponential counterpart, i.e. $\exp(-yf)$. For instance, AdaBoost can be proven to minimize this loss function when it progressively weighs incorrectly classified data in an exponential way (as an aside, for two loss functions $l_1$ and $l_2$, the gradient $\partial_w l_1(x,f(x))$ and $c \partial_w l_2(x, f(x))$ are identical if $l_1 = c l_2$, hence changing the loss function or reweighting the data are equivalent). No matter, the loss function is available in ``gluon`` and it implements$$l(y, f) = \exp(-y f)$$ ###Code
loss = gluon.loss.Exponential()

# getting data ready
thelabel = nd.ones_like(output)

with ag.record():    # start recording
    theloss = loss(output, thelabel)
theloss.backward()   # and compute the gradient

plt.plot(output.asnumpy(), theloss.asnumpy(), label='Exponential loss')
plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient of exponential loss')
plt.legend()
plt.show()
 ###Output _____no_output_____ ###Markdown Langford's VW lossOne of the more unusual loss functions is John Langford's VW style loss.
It is essentially a cut variant of Huber's robust loss, and it works by piecing together a linear, quadratic and constant part of a loss function. The benefit of this choice is that its gradient is bounded for significant misclassification, that its gradient vanishes for highly confident classification and that there is a gradation in terms of how poorly classified data is. We have$$l(y,f) = \begin{cases} 0 & \text{ if } 1 < y f \\ \frac{1}{2} (1-yf)^2 & \text{ if } 0 \leq yf \leq 1 \\ \frac{1}{2}-yf & \text{ otherwise}\end{cases}$$ ###Code
loss = gluon.loss.Langford()

with ag.record():    # start recording
    theloss = loss(output, nd.ones_like(output))
theloss.backward()   # and compute the gradient

plt.plot(output.asnumpy(), theloss.asnumpy(), label='VW style loss')
plt.plot(output.asnumpy(), output.grad.asnumpy(), label='Gradient')

# now compute the loss for y=-1
theloss = loss(output, -thelabel)
plt.plot(output.asnumpy(), theloss.asnumpy(), label='VW style loss for y=-1')
plt.legend()
plt.show()
 ###Output _____no_output_____ ###Markdown Multiclass Classification Multiclass SoftmaxOne way of dealing with multiple classes is to turn it into $n$ binary classification problems. That is, we simply test: 'is it class 1', 'is it class 2', ... 'is it class n'. In theory this sounds like a splendid idea. After all, this should be just as easy as determining which class it is. Unfortunately, that's not quite the case. Imagine the situation where none of the $n$ classifiers wants to take responsibility. Or imagine the case where more than one claims that it's its turn. Obviously there has to be a better way.
Indeed, there is.If we have a vector $f \in \mathbb{R}^n$ of scores, where the coordinate, say $f_i$ is large whenever we think that the correct class is $i$, then we can map $f$ into a probability vector via$$p(y=i|f) \propto \exp(f_i) \text{ and hence } p(y=i|f) = \frac{\exp(f_i)}{\sum_j \exp(f_j)}$$Here the normalization by $\sum_j \exp(f_j)$ is needed such that all the terms sum up to 1. Consequently the negative log-likelihood $-\log p(y|f)$, i.e. the quantity that we would want to minimize in this case is given by $$-\log p(y=i|f) = \log \left[\sum_{j} \exp(f_j)\right] - f_i$$In ``gluon`` the relevant function is [mxnet.gluon.loss.SoftmaxCrossEntropyLoss](http://mxnet.io/api/python/gluon.htmlmxnet.gluon.loss.SoftmaxCrossEntropyLoss). Let's check that this is correct. ###Code loss = gluon.loss.SoftmaxCrossEntropyLoss() f = nd.random_normal(shape=(1,10)) y = nd.array([4]) #class 4 is true print('Softmax loss is {}.'.format(loss(f,y).asscalar())) # now compute this by hand p = nd.exp(f) p = p / nd.sum(p) print('Class 4 has negative log-likelihood {}.'.format(-nd.log(p[0,4]).asscalar())) ###Output Softmax loss is 1.7977911233901978. Class 4 has negative log-likelihood 1.7977910041809082. ###Markdown The softmax loss has a rather nice property that is worth pointing out: its gradient is given by the difference between the conditional class probabilities $p(y=i|f)$ and the indicator vector $e_j$. This can be derived via $$\partial_{f_i} \log \sum_j \exp(f_j) = \frac{\exp(f_i)}{\sum_j \exp(f_j)} = p(y=i|f)$$Such a result seems to be too good to be true by chance. In fact, it holds for *every* member of a larger family of distributions, called the [Exponential Family](https://en.wikipedia.org/wiki/Exponential_family). More specifically, the derivative of the associated normalization is the expected value of the associated embedding. MaxMargin LossThe soft-margin loss function allowed us to distinguish between two classes with a margin of separation. 
That is, as long as $y f(x) \geq 1$ we incur no loss, whereas for smaller values of the margin (and for misclassifications) a loss is incurred. The obvious question is how to generalize this to more than two classes. One possibility is to treat things as many binary classification problems, but this is a bad idea, since tie-breaking can be tricky. An alternative is to require that the correct class be recognized with a safe margin relative to all the other classes as follows: $f(x,y) \geq f(x,y') + 1$ for all $y' \neq y$. Clearly this would do the trick, and we can design a loss function via$$l(y,f) = \mathrm{max}\left[0, \mathrm{max}_{y' \neq y} \left[f(x,y') - f(x,y) + 1\right]\right]$$This looks awkward since we have two nested maxima (the outer one is needed to ensure that we don't get negative values for our loss function). A cleaner (albeit slightly wasteful) way of writing this out is to define some function $\Delta(y,y')$ where $\Delta(y,y') = 1$ if $y \neq y'$ and $\Delta(y,y) = 0$. This is a 0-1 loss. In this case the above equation can be rewritten as:$$l(y,f) = \mathrm{max}_{y'} \left[f(x,y') - f(x,y) + \Delta(y,y')\right]$$Note that the function $l$ is convex in $f$ (once upon a time when people were using kernels this was a big deal since it meant that the entire optimization problem was convex ...). More importantly for us here is the fact that we now have a parameter, the loss $\Delta$, and an obvious question is what happens if we change it a bit. Let's take some intuition from the real world. Assume that you're driving on a road with a steep cliff on one side and an incline on the other. ![](img/road-cliff.jpg)Any sensible driver will try to stay as far away from the cliff as possible while hugging the shoulder that corresponds to the incline. This is the case since mistakes on the incline are much more benign (scratched rims) than those on the steep cliff (likely death).
In other words, a good driver will pick a margin between alternatives that is commensurate with the cost of making a mistake. [Taskar, Guestrin and Koller](http://dl.acm.org/citation.cfm?id=2981349) (TKG) in 2003 realized the same thing and decided to make $\Delta$ cost sensitive (they did lots of other things related to dynamic programming). The result is that the very same loss function as above now allows for misclassification-dependent confidence margins. Obviously this is something that we would also want in our machine learning arsenal. Enter `MaxMargin`. By default it uses the 0-1 loss above (and it automagically infers the size), but if you provide it with a suitable matrix `delta`, it will use the latter. ###Code # plain vanilla loss loss = gluon.loss.MaxMargin() # some classes (4 class problem) label = nd.array([1,3,2]) output = nd.random_normal(shape=(3,4)) print('Function values for 3 problems {}'.format(output)) theloss = loss(output, label) print('Loss function values {}'.format(theloss)) print('Instantiated loss matrix {}'.format(loss._delta)) # now make things more interesting by changing the loss matrix delta = nd.array(loss._delta) #call copy constructor delta[0,3] = 4 delta[1,3] = 4 delta[2,3] = 4 loss = gluon.loss.MaxMargin(delta) print('Instantiated loss matrix {}'.format(loss._delta)) print('Function values for 3 problems {}'.format(output)) theloss = loss(output, label) print('Loss function values {}'.format(theloss)) ###Output Function values for 3 problems [[ 0.84843999 0.85705417 -0.28716376 -0.65270543] [-0.56867689 -0.35533145 0.86864537 0.07883889] [ 0.50960332 0.80499649 -0.44336858 2.44341731]] <NDArray 3x4 @cpu(0)> Loss function values [ 0.99138576 1.78980649 3.88678598] <NDArray 3 @cpu(0)> Instantiated loss matrix [[ 0. 1. 1. 1.] [ 1. 0. 1. 1.] [ 1. 1. 0. 1.] [ 1. 1. 1. 0.]] <NDArray 4x4 @cpu(0)> Instantiated loss matrix [[ 0. 1. 1. 4.] [ 1. 0. 1. 4.] [ 1. 1. 0. 4.] [ 1. 1. 1. 
0.]] <NDArray 4x4 @cpu(0)> Function values for 3 problems [[ 0.84843999 0.85705417 -0.28716376 -0.65270543] [-0.56867689 -0.35533145 0.86864537 0.07883889] [ 0.50960332 0.80499649 -0.44336858 2.44341731]] <NDArray 3x4 @cpu(0)> Loss function values [ 2.49024034 1.78980649 6.88678598] <NDArray 3 @cpu(0)> ###Markdown Information Theory Primer EntropySometimes we care about probabilities rather than just labels. In particular, we might want to measure the distance between distributions. For that we need some basics about probabilities, such as the [entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)). In a nutshell, the entropy of a random variable is the amount of surprise we encounter each time we sample from it. For instance, the entropy of the constant function is zero, since we already know what's coming. The entropy of a fair coin being tossed is 1 bit. We have no idea what's going to happen (it's a fair coin after all) and there are only two possible outcomes. If we had a biased coin, e.g. one that produces heads with probability 0.9 and tails with probability 0.1, the surprise would be less (after all, most of the time we see a head). Correspondingly its entropy should be lower. On the other hand, a dice with 6 possible outcomes should have a higher degree of surprise. Without further ado, let's define the entropy function:$$H[p] := \sum_x -p(x) \log p(x)$$This works well for discrete outcomes. For densities we use$$H[p] := \int -p(x) \log p(x) dx$$We can check that for a fair coin the entropy is given by $H[p] = -2 \cdot 0.5 \log 0.5 = \log 2$. Information theorists often measure the information in 'nats' rather than bits. The difference is the base of the logarithm. It's easy to convert: 1 nat is $\log_2 e \approx 1.44$ bit. More generally, for a uniform distribution over $N$ outcomes it is $H[p] = \log N$. One of the fundamental theorems in information theory is that for a distribution $p$, we need at least $H[p]$ nats to encode it. 
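The coin and dice examples above can be checked directly; a short sketch in plain Python (entropies in nats):

```python
import math

def entropy(p):
    # H[p] in nats, with the convention 0 log 0 = 0
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

fair_coin = entropy([0.5, 0.5])      # log 2 nats, i.e. exactly 1 bit
biased_coin = entropy([0.9, 0.1])    # biased coin: less surprise
die = entropy([1 / 6] * 6)           # uniform over 6 outcomes: log 6 nats
print(fair_coin, biased_coin, die)
```

Dividing by `math.log(2)` converts nats to bits, matching the conversion factor $\log_2 e \approx 1.44$ mentioned above.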
There are a number of useful properties for the entropy that are employed in machine learning:

* Often when estimating distributions we want to find the one with the largest entropy that fits the requirements. That's in line with our desire to restrict our estimates as little as possible beyond what we actually observe.
* The entropy is a concave function. That is, for two distributions $p$ and $q$, the mixture of both has higher entropy: $H[\lambda p + (1-\lambda) q] \geq \lambda H[p] + (1-\lambda) H[q]$. To prove this, simply note that the function $-x \log x$ is concave.
* When we have independent random variables, say $x$ and $y$, then the entropy of the joint distribution is the sum of the individual entropies. This follows simply from the fact that $\log p(x) q(y) = \log p(x) + \log q(y)$.
* For dependent random variables the joint entropy is no larger than the sum of the individual entropies. This can be seen as follows:

$$\begin{eqnarray}H[p(x,y)] = & \int -p(x,y) \log p(x,y) \, dx dy \\= & \int -p(x,y) [\log p(x) p(y)] dx dy + \int p(x,y) \log \frac{p(x) p(y)}{p(x,y)} dx dy \\\leq & H[p(x)] + H[p(y)] + \log \int p(x,y) \frac{p(x) p(y)}{p(x,y)} dx dy \\= & H[p(x)] + H[p(y)]\end{eqnarray}$$

Here the inequality follows from the fact that $\log x$ is a concave function, hence the expectation of the logarithm is at most the logarithm of the expectation. Intuitively this result is straightforward - if $x$ and $y$ are dependent on each other, then knowing $x$ should tell us some more about $y$. Therefore, the joint entropy of $x$ and $y$ should be lower than the sum of the individual entropies. This leads us to the notion of mutual information. It is given by the difference between the joint and the independent entropies, i.e. $I(x,y) := H[p(x)] + H[p(y)] - H[p(x,y)]$. Basically it's the amount of information that we save. For instance, a light switch and a (functioning) light bulb are strongly correlated - knowing one tells us all about the other.
The entropy of the joint is 1 bit (if it's on with probability 0.5), but the sum of the entropies of switch and bulb individually is 2 bit. Hence the mutual information is 1 bit. Kullback Leibler DivergenceThis brings us to the KL divergence. It measures how close two distributions are. One way of defining such a quantity is to ask how many extra bits one would have to spend to encode data drawn from $p$ when using a code tuned for $q$. If we assume for a fact that it takes $-\log p(x)$ nat to optimally encode $x$, then the penalty from using the 'wrong' code is given by $$D(p\|q) = \sum_x p(x) [\log p(x) - \log q(x)]$$For densities the quantity is defined analogously, i.e. $\int p(x) [\log p(x) - \log q(x)] dx$. The first thing to prove is that this is actually a distance. For that we need to show that $D(p\|q) \geq 0$ with equality only for $p=q$. To see the latter, simply plug $p=q$ into the definition. To see the former, we rewrite $D$ in the same way as above, using convexity, this time of $-\log x$. $$D(p\|q) = \sum_x -p(x) \log \frac{q(x)}{p(x)} \geq -\log \sum_x p(x) \frac{q(x)}{p(x)} = 0$$As an aside, to see that $H[p]$ can be achieved, indeed, quantize all $p(x)$ into bins of the next largest fraction of $2$, e.g. $0.2$ goes into the bin of $\frac{1}{4}$. It is clear that the sum over those bins is no smaller than $1$ and no larger than $2$. Moreover, we can arrange them into a tree, where at level $l$ the bins are of size $2^{1-l}$. Then we simply index these bins according to their position of the tree. Each $x$ will require $\lceil \log_2 p(x) \rceil$ bits (whatever is left over, we simply discard). In sum this is no more than $\log_2 H[p] + 1$. To tighten the bound, simply send $N$ symbols. Since they can be encoded using at most $N \log_2 H[p] + 1$ bit, the code becomes increasingly efficient with only $1/N$ waste. This proves that such a code can be found. That it's impossible to do any better is a consequence of $D(p\|q) \geq 0$. 
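The two properties just shown — $D(p\|q) \geq 0$, with equality for $p = q$ — are easy to confirm numerically on random discrete distributions. A pure-Python sketch (distribution size and sample count are illustrative):

```python
import math
import random

def kl(p, q):
    # D(p||q) = sum_x p(x) (log p(x) - log q(x)), with 0 log 0 = 0
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q) if pi > 0)

def random_dist(n):
    # random point on the probability simplex
    w = [random.random() + 1e-6 for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

random.seed(0)
divs = [kl(random_dist(5), random_dist(5)) for _ in range(1000)]
print(min(divs))   # never below zero
p = random_dist(5)
print(kl(p, p))    # exactly zero for p = q
```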
Note that our construction relied on very long codes for efficiency. This is a real problem in practice. [Turbo codes](https://en.wikipedia.org/wiki/Turbo_code) are one of the techniques to address this, e.g. for mobile communications.

After this long detour, let's finally get to the KL divergence as a loss function. It generalizes the multiclass softmax as follows: instead of having just a single possible true class, it uses a probability distribution as reference. That is

$$l(f,y) = \sum_i y_i (\log(y_i) - f_i)$$

Here $f_i$ is assumed to be the logarithm of a probability distribution (or we can set a flag to transform the output into one beforehand).
###Code
loss = gluon.loss.KLDivLoss()

# generate some random probability distribution
f = nd.random_normal(shape=(1,10))
p = nd.exp(f)
p = p / nd.sum(p)

# generate some target distribution
y = nd.random_normal(shape=(1,10))
y = nd.exp(y)
y = y / nd.sum(y)

z = nd.zeros_like(y)
z[0,3] = 1

# distance between our estimate p and the 'true' distribution y
print(loss(nd.log(p), y))
# distance to itself - should be zero
print(loss(nd.log(p), p))
# equivalent of logistic loss with class 3 up to normalization over domain, i.e. 1/10
# note that this is VERY DIFFERENT from information theory but a traditional choice
# in deep learning
print(loss(nd.log(p), z))
print(-nd.log(p[0,3]))
###Output
[ 0.05661784]
<NDArray 1 @cpu(0)>

[ 1.39972247e-08]
<NDArray 1 @cpu(0)>

[ 0.22255933]
<NDArray 1 @cpu(0)>

[ 2.22559333]
<NDArray 1 @cpu(0)>
###Markdown
KL Divergence Estimator

Loss functions can also be used for other purposes, such as estimating interesting properties about a distribution. For instance, we might want to *estimate* the KL divergence between two distributions directly. Unfortunately this is difficult, since it requires density estimation, and even the ratio between two densities $p(x)/q(x)$ to begin with.
A rather neat trick was suggested by [Nguyen, Wainwright and Jordan](http://dept.stat.lsa.umich.edu/~xuanlong/Papers/Nguyen-Wainwright-Jordan-aos09.pdf) (NWJ) in 2009 when they realized that convex duality can be used to estimate such quantities rather directly. Before we dive in, we briefly need to explain what the Fenchel-Legendre dual of a function is:

$$F^*(z) = \mathrm{sup}_x x^\top z - F(x)$$

$F^*$ basically compares a line with slope $z$ to the function $F$ and measures the largest gap between that line and the function. It has the neat property that its dual is the function itself, i.e. $F^{**} = F$, provided that the function is convex and well-behaved. NWJ used this to derive estimators of the [F-divergence](https://en.wikipedia.org/wiki/F-divergence) between distributions. The latter is defined as

$$D_F(p\|q) := \int dq(x) F\left(\frac{p(x)}{q(x)}\right)$$

Plugging in duality, we can rewrite this as an optimization problem in terms of $F^*$ (remember, the dual of the dual is the original function, at least for well-behaved convex ones). That is, we obtain

$$\begin{eqnarray}
D_F(p\|q) = & \int dq(x) \sup_G \left[\frac{p(x)}{q(x)} G(x) - F^*(G(x))\right] \\
=& \sup_G \left[\int dp(x) G(x) - \int dq(x) F^*(G(x))\right]
\end{eqnarray}$$

Skipping over details of when and whether this is possible in general, we now have the difference in expectations over two different functions - $G(x)$ and $F^*(G(x))$ - for two different distributions, namely $p$ and $q$. These can be replaced by empirical estimates (aka sample averages) and it now looks very much like a classification problem, albeit with a weird kind of loss function. In particular, the KL-divergence has $F(x) = x \log x$.
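For $F(x) = x \log x$ the dual has the closed form $F^*(y) = \exp(y - 1)$, which we can sanity-check by brute force: evaluate $x z - F(x)$ on a fine grid of $x$ and take the maximum. This NumPy sketch is purely illustrative (the grid bounds are chosen so that the maximizer $x^* = e^{z-1}$ falls inside the grid):

```python
import numpy as np

def fenchel_dual(F, z, xs):
    """Numerically approximate F*(z) = sup_x (x*z - F(x)) over a grid xs."""
    return np.max(xs[None, :] * z[:, None] - F(xs)[None, :], axis=1)

xs = np.linspace(1e-6, 50.0, 200000)   # grid over the domain of F
F = lambda x: x * np.log(x)            # the F underlying the KL divergence
z = np.linspace(-1.0, 2.0, 7)

approx = fenchel_dual(F, z, xs)
exact = np.exp(z - 1.0)                # known closed form F*(y) = e^(y-1)
print(np.max(np.abs(approx - exact)))  # small numerical error
```

The same grid trick also lets one check $F^{**} = F$ numerically for other convex choices of $F$.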
After quite some algebra, and the substitution $G(x) = g(x) + 1$ (which tidies up the dual, since $F^*(G) = \exp(G - 1) = \exp(g)$), we arrive at the following problem:

$$D(p\|q) = \sup_g \left[\int dp(x) \, (g(x) + 1) - \int dq(x) \exp(g(x))\right]$$

This looks just like a classification problem with a weird loss function and with $p$ and $q$ substituted for classes $-1$ and $1$. The 'loss function' `DualKL` in `Gluon` provides this functionality.
###Code
# we broke the output data previously
output = nd.arange(-5,5,0.01)
output.attach_grad()  # we need the gradient

loss = gluon.loss.DualKL()
lossp = loss(output, -nd.ones_like(output))
lossq = loss(output, nd.ones_like(output))

plt.plot(output.asnumpy(), lossp.asnumpy(), label='Loss for p')
plt.plot(output.asnumpy(), lossq.asnumpy(), label='Loss for q')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Relative Novelty Detection

In some cases estimating a density or even a ratio of densities is not really what we want. Instead, we would like to find the *most typical* or the *most unusual* observation in a dataset. Unfortunately, these things are not really well defined. Before going into measure theory, we need some culinary support.

|![](img/doughnut.jpg)|![](img/berliner.jpg)|
|:---------------:|:---------------:|
|Doughnut|Jelly Doughnut|

Now imagine that we have two different pastry-shaped distributions - a doughnut shaped one and one that looks like a jelly doughnut (also called 'Berliner' in Europe). These two couldn't be more different from each other. Any data occurring in the doughnut hole (or far away in its periphery) is novel, whereas for the jelly doughnut only the data far away is novel. Yet we can transform one into the other, simply by messing with the radius in polar coordinates, e.g. via a new radial coordinate $r' = 1/r$. Hence, what once was novel is now no longer novel, since we stretched out the center of the poor jelly doughnut so much that its density becomes infinitesimally low.
In mathematical terms, this means that novelty is sensitive to the *measure* of the domain where it's defined. This is bad, since we usually don't know this measure. For a 3D space, there are still assumptions of what is considered reasonable (stretching out a poor jelly doughnut probably is not). But for arbitrary domains (database records, books, images, movies, TCP/IP logs) it's pretty hard to define what is reasonable. However, we all know that something that looks just like what we've seen before is probably reasonable ... but that's just like defining novelty by saying that something is novel if it looks novel. Ouch!

Here's a mathematically more sound way: we use data to define an implicit reference measure. E.g. for server logs we could use past data as a reference measure, such that we can ask the question whether something looks out of order relative to what we've seen in the past. Or for images, whether there's one that stands out relative to past images. Or even for pixels within an image. Mathematically this means that we care mostly about $p(x)/q(x)$ whenever $p(x)/q(x)$ is particularly small. For large ratios things are just fine. This is precisely what [Smola, Le and Teo](http://proceedings.mlr.press/v5/smola09a/smola09a.pdf) (SLT) in 2009 did in their Relative Novelty Detection paper. They used the same reasoning as NWJ but with a different F-divergence function:

$$F\left(\frac{p(x)}{q(x)}\right) = \mathrm{max}\left(0, \rho - \log \frac{p(x)}{q(x)}\right)$$

Here $\rho$ serves as a threshold to decide whether the density ratio is too low. Anything lower than $\exp(\rho)$ is too small. This actually allows us to focus on both the very typical and the very atypical aspects of the data, simply by picking a very small and a very large $\rho$ respectively. Note that for very large $\rho$ this is just the *reverse KL divergence*, i.e. pretty much the same thing as what NWJ were using.
Again, skipping over the tedious mathematical details of computing the dual of $F$ and of substituting (we have the same problem of nonnegativity), we arrive at the following loss function:

$$l(y,f) = \begin{cases} \exp(f - \rho) & \text{ if } y = -1 \\ -f-1 & \text{ if } y = 1 \text{ and } f > 0 \\ \exp(f) & \text{ if } y = 1 \text{ and } f \leq 0\end{cases}$$

'Training' with this loss function will give us precisely the relative novelty detector that we want. All we need to do now is threshold it at $\rho$ to get the desired output.
###Code
loss = gluon.loss.RelativeNovelty(rho=3)
lossp = loss(output, -nd.ones_like(output))
lossq = loss(output, nd.ones_like(output))

plt.plot(output.asnumpy(), lossp.asnumpy(), label='Loss for p')
plt.plot(output.asnumpy(), lossq.asnumpy(), label='Loss for q')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Exotic Losses

There are many loss functions that do not fit into the categories of classification and regression. In fact, there are some recent papers that argue that we should do away with hand-designed loss functions entirely, such as the one by [Isola, Zhu, Zhou and Efros](https://arxiv.org/abs/1611.07004) from 2016. That said, there are quite a few useful loss functions that are in use.

Triplet Loss

Assume that we want to embed data into a vector space. For instance, assume that we want to find embeddings of faces such that faces of the same person are grouped closely together whereas faces of different people are distant. In math: we want $\|f_a - f_{a'}\|$ to be small for $a$ and $a'$ drawn from the same class, whereas we want $\|f_a - f_b\|$ to be large whenever $a$ and $b$ are from different classes. However, this doesn't really tell us *how small* and *how large* we'd really like these distances to be. There is an easy fix - all we need to do is to enforce that the distances differ by at least some constant $c > 0$.
$$\|f_a - f_{a'}\| + c \leq \|f_a - f_b\|$$Now we can use the same trick as for soft-margin losses and turn this into a loss function by taking the maximum over the inequality. One last trick is to square the distances such that gradients look nice and we have the triplet loss:$$l(f_a, f_{a'}, f_b) = \mathrm{max}(0, c + \|f_a - f_{a'}\|^2 - \|f_a - f_b\|^2)$$Quite unsurprisingly, this is invoked via the `TripletLoss` class. Its constructor lets us adjust the margin $c$ by which data should be separated. Let's generate some data. ###Code loss = gluon.loss.TripletLoss(margin=2) # make some data. f1 and f2 are similar, f3 is (hopefully) far away theshape = (5,3) f1 = nd.normal(shape=theshape) f2 = nd.normal(shape=theshape) f3 = nd.normal(shape=theshape) * 5.0 # with the right pair of distances theloss = loss(f1, f2, f3) print(theloss) # these are likely far away in the wrong way, since we blew f3 out of proportions theloss = loss(f1, f3, f2) print(theloss) ###Output [ 6.16215515 0. 0. 0. 0. ] <NDArray 5 @cpu(0)> [ 0. 36.57266998 51.61498642 9.96375847 32.30460358] <NDArray 5 @cpu(0)>
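The same loss is simple to write out by hand, which makes the behavior in the example above easy to verify. This NumPy sketch mirrors the squared-distance form of the formula (the toy embeddings are made up for illustration):

```python
import numpy as np

def triplet_loss(fa, fap, fb, margin=2.0):
    """max(0, c + ||f_a - f_a'||^2 - ||f_a - f_b||^2), computed per row."""
    d_pos = np.sum((fa - fap) ** 2, axis=1)   # distance to the same-class point
    d_neg = np.sum((fa - fb) ** 2, axis=1)    # distance to the other-class point
    return np.maximum(0.0, margin + d_pos - d_neg)

fa = np.array([[0.0, 0.0]])
fap = np.array([[0.1, 0.0]])   # same identity, close by
fb = np.array([[3.0, 0.0]])    # different identity, far away

print(triplet_loss(fa, fap, fb))   # margin satisfied: loss is 0
print(triplet_loss(fa, fb, fap))   # triplet violated: 2 + 9 - 0.01 = 10.99
```

Just as with the Gluon version, the loss vanishes once the negative is farther away than the positive by at least the margin.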
Exercise 4 - LDA.ipynb
###Markdown
1. measure linear separability

https://www.python-course.eu/linear_discriminant_analysis.php

Given the m x n dimensional input $X_{mn}$, and expecting the 1 x n dimensional output $y_{1n}$, Fisher's Linear Discriminant Analysis (LDA) searches for the **linear** projection parameterized by $w_{m1}$, noted as $y_{1n} = w^T_{m1} * X_{mn}$, where the **separability** of the classes is maximized. The **separability** of the classes means that the predictions for the samples of a class should be closer to their own class's target than to the targets of the other classes.

********

Consider the optimization as a **least squares regression problem** from $X_{mn}$ to the output $y_{1n}$; the regression loss is:

$\begin{align}(1)\ Loss_w &= \sum_{c\in C} SE_c\\ & = \sum_{c\in C} \sum_{j\in N_c} [y({x_j}) - y({u_c})]^2 \\ & = \sum_{c\in C} \sum_{j\in N_c} (w^T * x_j - w^T * u_c)(w^T * x_j - w^T * u_c)^T\\ & = \sum_{c\in C} \sum_{j\in N_c} w^T*(x_j - u_c)(x_j - u_c)^T * w\\ & = \sum_{c\in C}w^T * [\sum_{j\in N_c} (x_j - u_c)(x_j - u_c)^T] * w\\ & = w^T * S_W * w \\\end{align}$

where $S_W$ is the within-class scatter matrix, denoted as:

$\begin{align}S_W = \sum_{c \in C}\sum_{j \in N_{c}} (x_j - u_c)(x_j - u_c)^T\\\end{align}$

Given that we have calculated the scatter matrix, the computation of the covariance matrix is straightforward: we just have to divide the scatter matrix by $N-1$, which means:

$\begin{align}Cov(X) &= \frac{\sum_{i\in N}(X_i - u)(X_i - u)^T}{N-1} = \frac{S_X}{N-1}\\ S_X &= (N - 1) * Cov(X)\\\end{align}$

$Loss_w$ represents how much the predictions deviate from the ground truth across all samples, noted as the **Within Group Loss**. This is important information, but not enough, as **separability** should be a notion reflecting the **contrast** between the confidence of an instance belonging to a class, and the confidence of belonging to other classes.
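The scatter-covariance relation $S_X = (N-1) \, Cov(X)$ can be verified numerically; a small NumPy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))          # N = 100 samples, 3 features
u = X.mean(axis=0)

S_X = (X - u).T @ (X - u)              # scatter matrix
# np.cov expects variables as rows and normalizes by N - 1 by default
print(np.allclose(S_X, 99 * np.cov(X.T)))   # True: S_X = (N-1) * Cov(X)
```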
$Loss_w$ measures how close the predictions are to the ground truth labels, but it does not tell how far the predictions are from the wrong labels (away from the faults). There should be a loss term measuring the scatter between different classes in the transformed space. Again, using the square error, the scatter between two classes $a, b$ can be expressed as:

$\begin{align}SE_{a,b \in C} & = N_{a} * N_{b} * [y(u_{a}) - y(u_{b})]^2 \\& = N_a * N_b * (w^T * u_a - w^T * u_b)(w^T * u_a - w^T * u_b)^T\\& = N_a * N_b * w^T*(u_a - u_b)(u_a - u_b)^T * w\\\end{align}$

When summing up all pairs, the overall becomes:

$\begin{align}(2)\ Loss_b &= \sum_{a,b}^{a \neq b} SE_{a,b}\\ &= w^T*\sum_{a,b}^{a \neq b} N_a * N_b * (u_a - u_b)(u_a - u_b)^T * w\\ &= w^T*\sum_{a,b}^{a \neq b} N_a * N_b * (u_a - u_b)(u_a - u + u - u_b)^T * w \\ &= w^T*\sum_{a,b}^{a \neq b} N_a * N_b * (u_a - u_b)(u_a - u)^T * w + w^T*\sum_{a,b}^{a \neq b} N_a * N_b * (u_a - u_b)(u - u_b)^T * w \\ &= w^T*\sum_{a,b}^{a \neq b} N_a * N_b * (u_a - u_b)(u_a - u)^T * w + w^T*\sum_{b,a}^{b \neq a} N_b * N_a * (u_b - u_a)(u_b - u)^T * w \\ &= 2 * w^T*\sum_{a,b}^{a \neq b} N_a * N_b * (u_a - u_b)(u_a - u)^T * w \\ &= 2 * w^T*\sum_{a}N_a*\sum_{b \neq a} N_b * (u_a - u_b)(u_a - u)^T * w \\ &= 2 * w^T*\sum_{a}N_a*[\sum_{b \neq a} N_b * u_a - \sum_{b \neq a} N_b * u_b]*(u_a - u)^T * w\\ &= 2 * w^T*\sum_{a}N_a*[(N - N_a) * u_a - (N*u - N_a*u_a)]*(u_a - u)^T * w\\ &= 2 * w^T*\sum_{a}N_a*[N * u_a - N*u]*(u_a - u)^T * w\\ &= 2 * w^T*\sum_{a}N * N_a*(u_a - u)(u_a - u)^T * w \\ &= 2 * N * w^T*\sum_{c}N_c*(u_c - u)(u_c - u)^T * w \\ &= 2 * N * w^T* S_B * w \\ \end{align}$

where $S_B$ is the between-class scatter matrix, denoted as:

$\begin{align}S_B = \sum_{c \in C} N_c (u_c - u)(u_c - u)^T\\\end{align}$

Interestingly, $S_B$ was initially defined as a weighted sum of pairwise outer products of the class mean vectors in the transformed space; in the end, it is equivalent to calculating the weighted sum of the outer product of each
class mean and the global mean in the transformed space.

Moreover, when summing up $S_W, S_B$, we get $S_T$, which captures the overall scatter of the samples:

$\begin{align}(3)\ S_T &= \sum_{x \in X} (x - u)(x - u)^T \\&= \sum_{c \in C}\sum_{ j \in N_c} [(x_j - u_c) + (u_c - u)][(x_j - u_c) + (u_c - u)]^T \\&= \sum_{c \in C}\sum_{ j \in N_c} (x_j - u_c)(x_j - u_c)^T + \sum_{c \in C}\sum_{ j \in N_c} (u_c - u)(u_c - u)^T + \sum_{c \in C}\sum_{ j \in N_c} (x_j - u_c)(u_c - u)^T + \sum_{c \in C}\sum_{ j \in N_c} (u_c - u)(x_j - u_c)^T \\&= \sum_{c \in C}\sum_{ j \in N_c} (x_j - u_c)(x_j - u_c)^T + \sum_{c \in C} N_c(u_c - u)(u_c - u)^T + \sum_{c \in C}(\sum_{ j \in N_c} x_j - N_c * u_c)(u_c - u)^T + \sum_{c \in C}(u_c - u) (\sum_{ j \in N_c}x_j - N_c * u_c)^T \\&= \sum_{c \in C}\sum_{ j \in N_c} (x_j - u_c)(x_j - u_c)^T + \sum_{c \in C} N_c(u_c - u)(u_c - u)^T + \sum_{c}(0)(u_c - u)^T + \sum_{c}(u_c - u)(0)^T \\&= S_W + S_B + 0 + 0 \\&= S_W + S_B\end{align}$

As the scatter matrix captures variance/covariance, it represents a notion of energy. We can think that $S_T$ captures the overall energy in the distribution, which can be split into two parts: $S_W$, which captures the 'harmful' energy that enlarges the distances between samples of the same class, and $S_B$, which captures the 'useful' energy that enlarges the distances between samples of different classes.

2. optimize linear separability

To increase the linear separability, we are looking for small $Loss_w$ and large $Loss_b$.
So we can form the loss function as:

$\begin{align}(4)\ J_w & = \frac{Loss_b}{Loss_w}\\ & = \frac{w^T * S_B * w}{w^T * S_W * w}\\ \end{align}$

Taking the derivative and setting it to zero:

$\begin{align}(5)\ J^{'}_w & = \frac{D(J_w)}{D_w} = 0\\ & => (w^T * S_W * w)* 2 * S_B * w - (w^T * S_B * w) * 2 * S_W * w = 0\\ & => \frac{(w^T * S_W * w)* S_B * w}{(w^T * S_W * w)} - \frac{(w^T * S_B * w) * S_W * w}{(w^T * S_W * w)}= 0\\ & => S_B * w - \frac{(w^T * S_B * w)}{(w^T * S_W * w)} * S_W * w= 0\\ & => S_B * w - J_w * S_W * w= 0\\ & => (S_B - J_w * S_W) * w= 0\\ & => S^{-1}_W*(S_B - J_w * S_W) * w= 0\\ & => S^{-1}_W*S_B *w - J_w * w = 0\\ & => S^{-1}_W*S_B *w = \lambda * w\\\end{align}$

Now we see that the optimal $w$ is an eigen-vector of $S^{-1}_W*S_B$, corresponding to the largest eigen-value $\lambda = J_w$. Note that here $w$ represents a normalized vector where $||w||_2 = 1$. When performing multi-class LDA, we would extract the first $C-1$ eigen-vectors (where $C$ is the number of classes) to form the overall transformation. As these eigen-vectors are linearly independent, they form the axis bases of the transformed space. This combination makes $\sum_{i \in C-1}J_{wi}$ largest among the transformations in the solution space composed of all $\{w:||w||_2 = 1\}$.

There is another way, using the Lagrangian form of the problem: maximizing $J_w$ is equivalent to maximizing $Loss_b = w^T*S_B*w$ while keeping $Loss_w = w^T*S_W*w = K$, where $K$ is a constant. The Lagrangian form is:

$\begin{align}(6)\ L & = w^T * S_B * w - \lambda * (w^T * S_W * w - K)\\ \end{align}$

Setting the derivative with respect to $w$ equal to $0_{m1}$ (a vector):

$\begin{align}(7)\ \frac {\delta L}{\delta w} & = 2 * S_B * w - \lambda * 2 * S_W * w = 0_{m1}\\ & => S_B * w - \lambda * S_W * w = 0_{m1}\\ & => S_B * w = \lambda * S_W * w\\ & => S_W^{-1}*S_B * w = \lambda *w\\ \end{align}$

3.
Now implement

3.1 generate dataset
###Code
#### Load dataset
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
style.use('fivethirtyeight')
np.random.seed(seed=42)

# Create data
num_samples = 100
gap = 4
A = np.random.randn(num_samples) + gap, np.random.randn(num_samples)
B = np.random.randn(num_samples), np.random.randn(num_samples) + gap
C = np.random.randn(num_samples) + gap, np.random.randn(num_samples) + gap
A = np.array(A)
B = np.array(B)
C = np.array(C)
ABC = np.hstack([A, B, C])
y = np.array([0] * num_samples + [1] * num_samples + [2] * num_samples)

## calculate the means
mean_A = A.mean(axis = 1, keepdims = True)
mean_B = B.mean(axis = 1, keepdims = True)
mean_C = C.mean(axis = 1, keepdims = True)
mean_ABC = ABC.mean(axis = 1, keepdims = True)

## visualize
fig = plt.figure(figsize=(10,5))
ax0 = fig.add_subplot(111)
ax0.scatter(A[0],A[1],marker='s',c='r',edgecolor='black')
ax0.scatter(B[0],B[1],marker='^',c='g',edgecolor='black')
ax0.scatter(C[0],C[1],marker='o',c='b',edgecolor='black')
ax0.scatter(mean_A[0],mean_A[1],marker='o', s = 100, c='y',edgecolor='black')
ax0.scatter(mean_B[0],mean_B[1],marker='o', s = 100, c='y',edgecolor='black')
ax0.scatter(mean_C[0],mean_C[1],marker='o', s = 100, c='y',edgecolor='black')
ax0.scatter(mean_ABC[0],mean_ABC[1],marker='o', s = 200,c='y',edgecolor='red')
plt.show()

## calculate the scatter matrices
scatter_A = np.dot(A-mean_A, np.transpose(A-mean_A))
scatter_B = np.dot(B-mean_B, np.transpose(B-mean_B))
scatter_C = np.dot(C-mean_C, np.transpose(C-mean_C))
scatter_ABC = np.dot(ABC-mean_ABC, np.transpose(ABC-mean_ABC))

## check the equivalence of the scatter matrix and (N-1) * covariance matrix
print('@scatter matrix:\n',scatter_A)
print('\n@covariance matrix to scatter matrix:\n', np.cov(A) * 99)

## compute Sw, Sb
Sw = scatter_A + scatter_B + scatter_C
Sb = scatter_ABC - Sw

## compute eigen-values and eigen-vectors
eigval, eigvec = np.linalg.eig(np.dot(np.linalg.inv(Sw),Sb))

## sort the eigen-pairs by decreasing eigen-value; note that the
## eigen-vectors are the *columns* of eigvec
eigen_pairs = sorted(zip(eigval, eigvec.T), key=lambda k: k[0], reverse=True)

## get the first 2 projections (n_classes - 1 = 2 components)
w = np.array([pair[1] for pair in eigen_pairs[:2]]).T

## transform
Projected = ABC.T.dot(w).T

## plot transformed features and means
fig = plt.figure(figsize=(12, 8))
ax0 = fig.add_subplot(111)
means = []
for l,c,m in zip(np.unique(y),['r','g','b'],['s','x','o']):
    means.append(np.mean(Projected[:,y==l],axis=1))
    ax0.scatter(Projected[0][y==l],
                Projected[1][y==l],
                c=c, marker=m, label=l, edgecolors='black')

## make grid
mesh_x, mesh_y = np.meshgrid(np.linspace(min(Projected[0]),max(Projected[0])),
                             np.linspace(min(Projected[1]),max(Projected[1])))
mesh = []
for i in range(len(mesh_x)):
    for j in range(len(mesh_x[0])):
        mesh.append((mesh_x[i][j],mesh_y[i][j]))

## make decision on grid points
from sklearn.neighbors import KNeighborsClassifier
NN = KNeighborsClassifier(n_neighbors=1)
NN.fit(means,['r','g','b'])
predictions = NN.predict(np.array(mesh))

## plot grid
ax0.scatter(np.array(mesh)[:,0],np.array(mesh)[:,1],color=predictions,alpha=0.4)

## plot means
means = np.array(means)
ax0.scatter(means[:,0],means[:,1],marker='o',c='yellow', edgecolors='red', s=200)
ax0.legend(loc='upper right')
plt.show()
###Output
_____no_output_____
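As a sanity check on the derivation, equation (3) says $S_T = S_W + S_B$ for any labeled dataset; a small self-contained NumPy sketch (with its own toy data, so it does not depend on the cell above):

```python
import numpy as np

rng = np.random.default_rng(0)
# three classes of 2-D points with different means
X = [rng.normal(loc=m, size=(50, 2)) for m in ([0, 0], [3, 0], [0, 3])]
u = np.mean(np.vstack(X), axis=0)   # global mean

# within-class scatter: sum of per-class centered scatter matrices
S_W = sum((x - x.mean(axis=0)).T @ (x - x.mean(axis=0)) for x in X)
# between-class scatter: weighted outer products of class means vs global mean
S_B = sum(len(x) * np.outer(x.mean(axis=0) - u, x.mean(axis=0) - u) for x in X)
# total scatter, computed directly
S_T = (np.vstack(X) - u).T @ (np.vstack(X) - u)

print(np.allclose(S_T, S_W + S_B))   # True
```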
lectures/notes/Lecture8-supervised-classification.ipynb
###Markdown Lecture 18: (Supervised) Classification This notebook was developed by [Zeljko Ivezic](http://faculty.washington.edu/ivezic/) for the 2021 data science class at the University of Sao Paulo and it is available from [github](https://github.com/ivezic/SaoPaulo2021/blob/main/notebooks/Lecture18.ipynb).Note: this notebook contains code developed by Z. Ivezic, M. Juric, A. Connolly, B. Sippocz, Jake VanderPlas, G. Richards and many others. Resources for this notebook include:- [Textbook](http://press.princeton.edu/titles/10159.html) Chapter 9. - code taken and modified from [astroML fig. 9.18](http://www.astroml.org/book_figures/chapter9/fig_star_quasar_ROC.html) This notebook includes: [Introduction to Supervised Classification](intro)- supervised vs. unsupervised classification- types of classifiers: generative vs. discriminative- classification loss- ROC curves [An example of a discriminative classifier: Support Vector Machine classifier](svm)[An example of a generative classifier: star/galaxy separation using Gaussian Naive Bayes classifier](GNBsg)[Comparison of many methods using ROC curves](roc) Introduction to Supervised Classification [Go to top](toc) In density estimation, we estimate joint probability distributions from multivariate data sets to identify the inherent clustering. This is essentially **unsupervised classification**. In other words, this method is search for unknown structure in your (multi-dimensional) dataset.If we have labels for some of these data points (e.g., an object is tall, short, red, or blue), we can develop a relationship between the label and the properties of a source. This is **supervised classification**. In other words, this method is finding objects in your (multi-dimensional) dataset that "look like" objects in your training set. Classification, regression, and density estimation are all related. 
For example, the regression function $\hat{y} = f(y|\vec{x})$ is the best estimated value of $y$ given a value of $\vec{x}$. In classification $y$ is categorical and $f(y|\vec{x})$ is called the _discriminant function_

- Using density estimation for classification is referred to as _generative classification_ (we have a full model of the density for each class, or we have a model which describes how data could be generated from each class).
- Classification that finds the decision boundary that separates classes is called _discriminative classification_

Both have their place in astrophysical classification.

Classification loss: how well are we doing?

The first question we need to address is how we score (define the success of) our classification. We can define a _loss function_. A zero-one loss function assigns a value of one for a misclassification and zero for a correct classification (i.e. we will want to minimize the loss).

If $\hat{y}$ is the best guess value of $y$, the classification loss, $L(y,\hat{y})$, is

$$L(y,\hat{y}) = \delta(y \neq \hat{y})$$

which means

$\begin{eqnarray} L(y,\hat{y}) & = & \left\{ \begin{array}{cl} 1 & \mbox{if $y\neq\hat{y}$}, \\ 0 & \mbox{otherwise.} \end{array} \right. \end{eqnarray}$

Note the obvious implication: the minimum of this loss, summed over a sample, is zero and its maximum is the sample size.
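As a quick numeric illustration of the zero-one loss (with made-up labels):

```python
import numpy as np

y = np.array([0, 1, 1, 0, 1])      # true labels
y_hat = np.array([0, 1, 0, 0, 0])  # predictions

loss = np.sum(y != y_hat)   # total zero-one loss: number of misclassifications
risk = np.mean(y != y_hat)  # average loss over the sample
print(loss, risk)           # 2 0.4
```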
The expectation (mean) value of the loss $\mathbb{E} \left[ L(y,\hat{y}) \right] = p(y\neq \hat{y})$ is called the classification risk.

This is related to regression loss functions: $L(y, \hat{y}) = (y - \hat{y})^2$ and risk $\mathbb{E}[(y - \hat{y})^2]$.

We can then define:

> $ {\rm completeness} = {\rm true\ positive\ rate} = \frac{\rm true\ positives} {\rm true\ positives + false\ negatives}$

> $ {\rm contamination} = \frac{\rm false\ positives} {\rm true\ positives + false\ positives}.$

(Note that the contamination, also known as the false discovery rate, is not the same as the false positive rate, $\frac{\rm false\ positives}{\rm false\ positives + true\ negatives}$.)

Types of classifiers

There are two basic types of classification methods: **generative** classification methods model the underlying density field (i.e. they rely on density estimation methods, such as the Gaussian Mixture Model), and **discriminative** classification methods (e.g. Support Vector Machine), which focus on finding the decision boundary which separates classes directly. The former are easier to interpret; the latter often work better in high-D cases.

![](figures/genclass.png) ![](figures/discrimclass.png)

Comparing the performance of classifiers

Best performance is a bit of a subjective, context-dependent topic (e.g. star-galaxy separation for correlation function studies vs. star-galaxy separation for Galactic streams studies). We trade contamination as a function of completeness, and this is science dependent.

**ROC curves: Receiver Operating Characteristic curves**

- Plot the true-positive vs the false-positive rate
- Initially used to analyze radar results in WWII (a very productive era for statistics...).
- One concern about ROC curves is that they are sensitive to the relative sample sizes (if there are many more background events than source events, small false positive rates can still dominate a signal). For these cases we can plot efficiency (1 - contamination) vs completeness

Which classification method to use?
There is no general answer because the performance of a method depends on the properties of your dataset and what is the goal of your classification (a good workshop has many different types of hammers, screwdrivers etc. in its toolbox!). We will illustrate here a number of methods and compare their performance on a hard dataset using ROC curves:

- LinearDiscriminantAnalysis
- QuadraticDiscriminantAnalysis
- Gaussian Naive Bayes
- Gaussian Mixture Model Bayes
- K nearest neighbors (KNN) classifier
- Decision Tree Classifier
- Logistic Regression

The "hard problem" is drawn from SDSS: selection of RR Lyrae stars using single-epoch photometry. The problem is hard because it is imbalanced: there are vastly more non-RR Lyrae stars than RR Lyrae stars with similar colors. For more astrophysical and other details, please see [this paper](http://faculty.washington.edu/ivezic/Publications/203458.web.pdf).

**Note** Image classification using Convolutional Neural Networks is a rapidly developing topic! We cannot do it justice in this short course, but we will say a few words about it and show an example in our last lecture.
###Code
import numpy as np
from matplotlib import pyplot as plt

from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
#from astroML.classification import GMMBayes

from sklearn.metrics import roc_curve
from astroML.utils import split_samples, completeness_contamination

from astroML.datasets import fetch_rrlyrae_combined
###Output
_____no_output_____
###Markdown
An example of a discriminative classifier: Support Vector Machine classifier [Go to top](toc)

This is a rather general (multi-purpose) hammer!
Support Vector Machines

Find the hyperplane that maximizes the distance of the closest point from either class. This distance is the margin (the width of the band before it hits a point). We want the line that maximizes the margin ($m$). The points on the margin are called _support vectors_.

If we assume $y \in \{-1,1\}$, the decision boundary corresponds to $\beta_0 + \beta^T x = 0$, and the edges of the margin to $\beta_0 + \beta^T x = \pm 1$ (the margin is attained exactly when $\beta_0 + \beta^T x_i = 1$, etc.).

The hyperplane which maximizes the margin is given by finding

$$\max_{\beta_0,\beta}(m) \;\;\; \mbox{subject to} \;\;\; \frac{1}{||\beta||} y_i ( \beta_0 + \beta^T x_i ) \geq m \,\,\, \forall \, i.$$

The constraints can be written as $y_i ( \beta_0 + \beta^T x_i ) \geq m ||\beta|| $. Thus the optimization problem is equivalent to minimizing

$$\frac{1}{2} ||\beta|| \;\;\; \mbox{subject to} \;\;\; y_i ( \beta_0 + \beta^T x_i ) \geq 1 \,\,\, \forall \, i.$$

This optimization is a _quadratic programming_ problem (quadratic objective function with linear constraints).

Note that because SVM uses a metric which maximizes the margin rather than a measure over all points in the data sets, it is similar in spirit to rank-based estimators:

- The median of a distribution is unaffected by even large perturbations of outlying points, as long as those perturbations do not cross the boundary.
- In the same way, once the support vectors are determined, changes to the positions or numbers of points beyond the margin will not change the decision boundary. For this reason, SVM can be a very powerful tool for discriminative classification.
- This is why there is a high completeness compared to the other methods: it does not matter that the background sources outnumber the RR Lyrae stars by a factor of $\sim$200 to 1.
It simply determines the best boundary between the small RR Lyrae clump and the large background clump.
- This completeness, however, comes at the cost of a relatively large contamination level.
- SVM is not scale invariant, so it is often worth rescaling the data to [0,1] or whitening it to have a mean of 0 and variance 1 (remember to do this to the test data as well!)
- The data don't need to be separable (we can put in a constraint minimizing the number of "failures")

Dataset

We will use the RR Lyrae dataset. We get the data here and split it into training and testing sets, and then use the same sets for all the examples below.
###Code
#----------------------------------------------------------------------
# get data and split into training & testing sets
X, y = fetch_rrlyrae_combined()
X = X[:, [1, 0, 2, 3]]  # rearrange columns for better 1-color results
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25],
                                                     random_state=0)

N_tot = len(y)
N_st = np.sum(y == 0)
N_rr = N_tot - N_st
N_train = len(y_train)
N_test = len(y_test)
N_plot = 5000 + N_rr

# SVM takes several minutes to run, and is order[N^2]
# truncating the dataset can be useful for experimentation.
#X_tr = X[::5] #y_tr = y[::5] #---------------------------------------------------------------------- # Fit Kernel SVM Ncolors = np.arange(1, X.shape[1] + 1) def compute_SVM(Ncolors): classifiers = [] predictions = [] for nc in Ncolors: # perform support vector classification clf = SVC(kernel='rbf', gamma=20.0, class_weight='balanced') clf.fit(X_train[:, :nc], y_train) y_pred = clf.predict(X_test[:, :nc]) classifiers.append(clf) predictions.append(y_pred) return classifiers, predictions classifiers, predictions = compute_SVM(Ncolors) completeness, contamination = completeness_contamination(predictions, y_test) print("completeness", completeness) print("contamination", contamination) #------------------------------------------------------------ # compute the decision boundary clf = classifiers[1] xlim = (0.7, 1.35) ylim = (-0.15, 0.4) xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 101), np.linspace(ylim[0], ylim[1], 101)) Z = clf.predict(np.c_[yy.ravel(), xx.ravel()]) Z = Z.reshape(xx.shape) # smooth the boundary from scipy.ndimage import gaussian_filter Z = gaussian_filter(Z, 2) #---------------------------------------------------------------------- # plot the results fig = plt.figure(figsize=(8, 4)) fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0, left=0.1, right=0.95, wspace=0.2) # left plot: data and decision boundary ax = fig.add_subplot(121) im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:], s=4, lw=0, cmap=plt.cm.Oranges, zorder=2) im.set_clim(-0.5, 1) ax.contour(xx, yy, Z, [0.5], colors='k') ax.set_xlim(xlim) ax.set_ylim(ylim) ax.set_xlabel('$u-g$') ax.set_ylabel('$g-r$') # plot completeness vs Ncolors ax = fig.add_subplot(222) ax.plot(Ncolors, completeness, 'o-k', ms=6) ax.xaxis.set_major_locator(plt.MultipleLocator(1)) ax.yaxis.set_major_locator(plt.MultipleLocator(0.2)) ax.xaxis.set_major_formatter(plt.NullFormatter()) ax.set_ylabel('completeness') ax.set_xlim(0.5, 4.5) ax.set_ylim(-0.1, 1.1) ax.grid(True) ax = fig.add_subplot(224) 
ax.plot(Ncolors, contamination, 'o-k', ms=6) ax.xaxis.set_major_locator(plt.MultipleLocator(1)) ax.yaxis.set_major_locator(plt.MultipleLocator(0.2)) ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i')) ax.set_xlabel('N colors') ax.set_ylabel('contamination') ax.set_xlim(0.5, 4.5) ax.set_ylim(-0.1, 1.1) ax.grid(True) plt.show() ###Output completeness [1. 1. 1. 1.] contamination [0.90108303 0.83901293 0.83573141 0.81561238] ###Markdown Gaussian Naive Bayes [Go to top](toc)In Gaussian naive Bayes $p_k(x^i)$ are modeled as one-dimensional normal distributions, with means $\mu^i_k$ and widths $\sigma^i_k$. The naive Bayes estimator is then$$\hat{y} = \arg\max_{y_k}\left[\ln \pi_k - \frac{1}{2}\sum_{i=1}^N\left(\ln\left(2\pi(\sigma^i_k)^2\right) + \frac{(x^i - \mu^i_k)^2}{(\sigma^i_k)^2} \right) \right]$$ Note: this is the log of the Bayes criterion with no normalization constant. This classifier is easy to implement and very robust. However, it works well only when the distributions are aligned with coordinate axes (that is, when "measurement types" are quite unrelated to each other, such as brightness and size in this example). For a deeper discussion of this problem, see [this paper on star-galaxy separation](http://faculty.washington.edu/ivezic/Publications/Slater_2020_AJ_159_65.pdf).
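To make the criterion concrete, here is a minimal NumPy sketch of the estimator above — my own illustration of the formula, not the astroML or scikit-learn implementation:

```python
import numpy as np

def gnb_fit(X, y):
    """Per-class priors, means and variances for Gaussian naive Bayes."""
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) for c in classes])
    return classes, priors, mu, var

def gnb_predict(X, classes, priors, mu, var):
    # joint log posterior (up to a constant) for each class k:
    #   ln pi_k - 1/2 sum_i [ ln(2 pi var_k^i) + (x^i - mu_k^i)^2 / var_k^i ]
    jll = np.stack([np.log(priors[k])
                    - 0.5 * np.sum(np.log(2 * np.pi * var[k])
                                   + (X - mu[k]) ** 2 / var[k], axis=1)
                    for k in range(len(classes))], axis=1)
    return classes[np.argmax(jll, axis=1)]
```

On two well-separated classes this reproduces the obvious labels, and on data like the RR Lyrae sample it should behave like `GaussianNB` up to numerical details.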
###Code from astroML.datasets import fetch_imaging_sample def get_stars_and_galaxies(Nstars=10000, Ngals=10000): """Get the subset of star/galaxy data to plot""" data = fetch_imaging_sample() objtype = data['type'] stars = data[objtype == 6][:Nstars] galaxies = data[objtype == 3][:Ngals] return np.concatenate([stars,galaxies]), np.concatenate([np.zeros(len(stars)), np.ones(len(galaxies))]) data, y = get_stars_and_galaxies(Nstars=10000, Ngals=10000) # select r model mag and psf - model mag as columns X = np.column_stack((data['rRaw'], data['rRawPSF'] - data['rRaw'])) #------------------------------------------------------------ # Fit the Naive Bayes classifier clf = GaussianNB() clf.fit(X, y) # predict the classification probabilities on a grid xlim = (15, 25) ylim = (-5, 5) xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 71), np.linspace(ylim[0], ylim[1], 81)) Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()]) Z = Z[:, 1].reshape(xx.shape) #------------------------------------------------------------ # Plot the results fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(111) ax.scatter(X[:, 0], X[:, 1], c=y, zorder=2, alpha=0.5) ax.contour(xx, yy, Z, [0.5], linewidths=2., colors='blue') ax.set_xlim(xlim) ax.set_ylim(ylim) ax.set_xlabel('$x$') ax.set_ylabel('$y$') plt.show() ###Output _____no_output_____ ###Markdown Here is a generalization of Gaussian Naive Bayes Classifier to Gaussian Mixture Bayes Classifier ###Code from sklearn.mixture import GaussianMixture class GMMBayes(GaussianNB): """GaussianMixture Bayes Classifier This is a generalization to the Naive Bayes classifier: rather than modeling the distribution of each class with axis-aligned gaussians, GMMBayes models the distribution of each class with mixtures of gaussians. This can lead to better classification in some cases. Parameters ---------- n_components : int or list number of components to use in the GaussianMixture. If specified as a list, it must match the number of class labels. 
Default is 1. **kwargs : dict, optional other keywords are passed directly to GaussianMixture """ def __init__(self, n_components=1, **kwargs): self.n_components = np.atleast_1d(n_components) self.kwargs = kwargs def fit(self, X, y): X = np.asarray(X) y = np.asarray(y) n_samples, n_features = X.shape if n_samples != y.shape[0]: raise ValueError("X and y have incompatible shapes") self.classes_ = np.unique(y) self.classes_.sort() unique_y = self.classes_ n_classes = unique_y.shape[0] if self.n_components.size not in (1, len(unique_y)): raise ValueError("n_components must be compatible with " "the number of classes") self.gmms_ = [None for i in range(n_classes)] self.class_prior_ = np.zeros(n_classes) n_comp = np.zeros(len(self.classes_), dtype=int) + self.n_components for i, y_i in enumerate(unique_y): if n_comp[i] > X[y == y_i].shape[0]: warnstr = ("Expected n_samples >= n_components but got " "n_samples={0}, n_components={1}, " "n_components set to {0}.") warnings.warn(warnstr.format(X[y == y_i].shape[0], n_comp[i])) n_comp[i] = X[y == y_i].shape[0] # clamp to the number of samples in this class self.gmms_[i] = GaussianMixture(n_comp[i], **self.kwargs).fit(X[y == y_i]) self.class_prior_[i] = float(np.sum(y == y_i)) / n_samples # np.float is deprecated; use float return self def _joint_log_likelihood(self, X): X = np.asarray(np.atleast_2d(X)) logprobs = np.array([g.score_samples(X) for g in self.gmms_]).T return logprobs + np.log(self.class_prior_) ###Output _____no_output_____ ###Markdown Comparison of many methods using ROC curves [Go to top](toc) ###Code #---------------------------------------------------------------------- # get data and split into training & testing sets X, y = fetch_rrlyrae_combined() X = X[:, [1, 0, 2, 3]] # rearrange columns for better 1-color results (X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25], random_state=0) N_tot = len(y) N_st = np.sum(y == 0) N_rr = N_tot - N_st N_train = len(y_train) N_test = len(y_test) N_plot = 5000 + N_rr #------------------------------------------------------------ # Fit all the
models to the training data def compute_models(*args): names = [] probs = [] for classifier, kwargs in args: print(classifier.__name__) clf = classifier(**kwargs) clf.fit(X_train, y_train) y_probs = clf.predict_proba(X_test)[:, 1] names.append(classifier.__name__) probs.append(y_probs) return names, probs names, probs = compute_models((GaussianNB, {}), (LinearDiscriminantAnalysis, {}), (QuadraticDiscriminantAnalysis, {}), (LogisticRegression, dict(class_weight='balanced')), (KNeighborsClassifier, dict(n_neighbors=10)), (DecisionTreeClassifier, dict(random_state=0, max_depth=12, criterion='entropy')), (GMMBayes, dict(n_components=3, tol=1E-5, covariance_type='full'))) #------------------------------------------------------------ # Plot ROC curves and completeness/efficiency fig = plt.figure(figsize=(10, 5)) fig.subplots_adjust(left=0.1, right=0.95, bottom=0.15, top=0.9, wspace=0.25) # ax2 will show roc curves ax1 = plt.subplot(121) # ax1 will show completeness/efficiency ax2 = plt.subplot(122) labels = dict(GaussianNB='GNB', LinearDiscriminantAnalysis='LDA', QuadraticDiscriminantAnalysis='QDA', KNeighborsClassifier='KNN', DecisionTreeClassifier='DT', GMMBayes='GMMB', LogisticRegression='LR') thresholds = np.linspace(0, 1, 1001)[:-1] # iterate through and show results for name, y_prob in zip(names, probs): fpr, tpr, thresh = roc_curve(y_test, y_prob) # add (0, 0) as first point fpr = np.concatenate([[0], fpr]) tpr = np.concatenate([[0], tpr]) ax1.plot(fpr, tpr, label=labels[name]) comp = np.zeros_like(thresholds) cont = np.zeros_like(thresholds) for i, t in enumerate(thresholds): y_pred = (y_prob >= t) comp[i], cont[i] = completeness_contamination(y_pred, y_test) ax2.plot(1 - cont, comp, label=labels[name]) ax1.set_xlim(0, 0.04) ax1.set_ylim(0, 1.02) ax1.xaxis.set_major_locator(plt.MaxNLocator(5)) ax1.set_xlabel('false positive rate') ax1.set_ylabel('true positive rate') ax1.legend(loc=4) ax2.set_xlabel('efficiency') ax2.set_ylabel('completeness') ax2.set_xlim(0, 
1.0) ax2.set_ylim(0.2, 1.02) plt.show() ###Output GaussianNB LinearDiscriminantAnalysis QuadraticDiscriminantAnalysis LogisticRegression KNeighborsClassifier DecisionTreeClassifier GMMBayes ###Markdown Let's now say a few more words about these classification methods: what exactly do they do? Linear and quadratic discriminant analysisLinear discriminant analysis (LDA) assumes the class distributions have identical covariances for all $k$ classes (all classes are a set of shifted Gaussians). The optimal classifier is derived from the log of the class posteriors $$g_k(\vec{x}) = \vec{x}^T \Sigma^{-1} \vec{\mu_k} - \frac{1}{2}\vec{\mu_k}^T \Sigma^{-1} \vec{\mu_k} + \log \pi_k,$$with $\vec{\mu_k}$ the mean of class $k$ and $\Sigma$ the covariance of the Gaussians. The class-dependent covariances that would normally give rise to a quadratic dependence on $\vec{x}$ cancel out if they are assumed to be constant. The Bayes classifier is, therefore, linear with respect to $\vec{x}$. The discriminant boundary between classes is the line that minimizes the overlap between Gaussians$$ g_k(\vec{x}) - g_\ell(\vec{x}) = \vec{x}^T \Sigma^{-1} (\mu_k-\mu_\ell) - \frac{1}{2}(\mu_k + \mu_\ell)^T \Sigma^{-1}(\mu_k -\mu_\ell) + \log (\frac{\pi_k}{\pi_\ell}) = 0. $$Relaxing the requirement that the covariances of the Gaussians are constant, the discriminant function becomes quadratic in $\vec{x}$:$$ g_k(\vec{x}) = -\frac{1}{2} \log | \Sigma_k | - \frac{1}{2}(\vec{x}-\mu_k)^T \Sigma_k^{-1}(\vec{x}-\mu_k) + \log \pi_k.
$$This is sometimes known as _quadratic discriminant analysis_ (QDA) ###Code #---------------------------------------------------------------------- # perform LinearDiscriminantAnalysis classifiers = [] predictions = [] Ncolors = np.arange(1, X.shape[1] + 1) for nc in Ncolors: clf = LinearDiscriminantAnalysis() clf.fit(X_train[:, :nc], y_train) y_pred = clf.predict(X_test[:, :nc]) classifiers.append(clf) predictions.append(y_pred) completeness, contamination = completeness_contamination(predictions, y_test) print("completeness", completeness) print("contamination", contamination) # perform QuadraticDiscriminantAnalysis qclassifiers = [] qpredictions = [] for nc in Ncolors: qlf = QuadraticDiscriminantAnalysis() qlf.fit(X_train[:, :nc], y_train) qy_pred = qlf.predict(X_test[:, :nc]) qclassifiers.append(qlf) qpredictions.append(qy_pred) qpredictions = np.array(qpredictions) qcompleteness, qcontamination = completeness_contamination(qpredictions, y_test) print("completeness", qcompleteness) print("contamination", qcontamination) #------------------------------------------------------------ # Compute the decision boundary clf = classifiers[1] qlf = qclassifiers[1] xlim = (0.7, 1.35) ylim = (-0.15, 0.4) xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 71), np.linspace(ylim[0], ylim[1], 81)) Z = clf.predict_proba(np.c_[yy.ravel(), xx.ravel()]) Z = Z[:, 1].reshape(xx.shape) QZ = qlf.predict_proba(np.c_[yy.ravel(), xx.ravel()]) QZ = QZ[:, 1].reshape(xx.shape) #---------------------------------------------------------------------- # plot the results fig = plt.figure(figsize=(8, 4)) fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0, left=0.1, right=0.95, wspace=0.2) # left plot: data and decision boundary ax = fig.add_subplot(121) im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:], s=4, lw=0, cmap=plt.cm.Oranges, zorder=2) im.set_clim(-0.5, 1) im = ax.imshow(Z, origin='lower', aspect='auto', cmap=plt.cm.binary, zorder=1, extent=xlim + ylim) im.set_clim(0, 
1.5) ax.contour(xx, yy, Z, [0.5], linewidths=2., colors='k') ax.set_xlim(xlim) ax.set_ylim(ylim) ax.set_xlabel('$u-g$') ax.set_ylabel('$g-r$') # right plot: qda ax = fig.add_subplot(122) im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:], s=4, lw=0, cmap=plt.cm.Oranges, zorder=2) im.set_clim(-0.5, 1) im = ax.imshow(QZ, origin='lower', aspect='auto', cmap=plt.cm.binary, zorder=1, extent=xlim + ylim) im.set_clim(0, 1.5) ax.contour(xx, yy, QZ, [0.5], linewidths=2., colors='k') ax.set_xlim(xlim) ax.set_ylim(ylim) ax.set_xlabel('$u-g$') ax.set_ylabel('$g-r$') plt.show() ###Output _____no_output_____ ###Markdown GMM and Bayes classificationThe natural extension to the Gaussian assumptions is to use GMM's to learn the density distribution. The number of Gaussian components $K$ must be chosen for each class independently ###Code # GMM-bayes takes several minutes to run, and is order[N^2] # where N is the sample size # truncating the dataset can be useful for experimentation. #X_tr = X[::10] #y_tr = y[::10] #---------------------------------------------------------------------- # perform GMM Bayes Ncolors = np.arange(1, X.shape[1] + 1) Ncomp = [1, 3] def compute_GMMbayes(Ncolors, Ncomp): classifiers = [] predictions = [] for ncm in Ncomp: classifiers.append([]) predictions.append([]) for nc in Ncolors: clf = GMMBayes(ncm, tol=1E-5, covariance_type='full') clf.fit(X_train[:, :nc], y_train) y_pred = clf.predict(X_test[:, :nc]) classifiers[-1].append(clf) predictions[-1].append(y_pred) return classifiers, predictions classifiers, predictions = compute_GMMbayes(Ncolors, Ncomp) completeness, contamination = completeness_contamination(predictions, y_test) print("completeness", completeness) print("contamination", contamination) #------------------------------------------------------------ # Compute the decision boundary clf = classifiers[1][1] xlim = (0.7, 1.35) ylim = (-0.15, 0.4) xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 71), np.linspace(ylim[0], ylim[1], 
81)) Z = clf.predict_proba(np.c_[yy.ravel(), xx.ravel()]) Z = Z[:, 1].reshape(xx.shape) #---------------------------------------------------------------------- # plot the results fig = plt.figure(figsize=(8, 4)) fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0, left=0.1, right=0.95, wspace=0.2) # left plot: data and decision boundary ax = fig.add_subplot(121) im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:], s=4, lw=0, cmap=plt.cm.Oranges, zorder=2) im.set_clim(-0.5, 1) im = ax.imshow(Z, origin='lower', aspect='auto', cmap=plt.cm.binary, zorder=1, extent=xlim + ylim) im.set_clim(0, 1.5) ax.contour(xx, yy, Z, [0.5], colors='k') ax.set_xlim(xlim) ax.set_ylim(ylim) ax.set_xlabel('$u-g$') ax.set_ylabel('$g-r$') # plot completeness vs Ncolors ax = fig.add_subplot(222) ax.plot(Ncolors, completeness[0], '^--k', ms=6, label='N=%i' % Ncomp[0]) ax.plot(Ncolors, completeness[1], 'o-k', ms=6, label='N=%i' % Ncomp[1]) ax.xaxis.set_major_locator(plt.MultipleLocator(1)) ax.yaxis.set_major_locator(plt.MultipleLocator(0.2)) ax.xaxis.set_major_formatter(plt.NullFormatter()) ax.set_ylabel('completeness') ax.set_xlim(0.5, 4.5) ax.set_ylim(-0.1, 1.1) ax.grid(True) # plot contamination vs Ncolors ax = fig.add_subplot(224) ax.plot(Ncolors, contamination[0], '^--k', ms=6, label='N=%i' % Ncomp[0]) ax.plot(Ncolors, contamination[1], 'o-k', ms=6, label='N=%i' % Ncomp[1]) ax.legend(loc='lower right', bbox_to_anchor=(1.0, 0.78)) ax.xaxis.set_major_locator(plt.MultipleLocator(1)) ax.yaxis.set_major_locator(plt.MultipleLocator(0.2)) ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i')) ax.set_xlabel('N colors') ax.set_ylabel('contamination') ax.set_xlim(0.5, 4.5) ax.set_ylim(-0.1, 1.1) ax.grid(True) plt.show() ###Output completeness [[0.48175182 0.68613139 0.73722628 0.78832117] [0. 0.11678832 0.43065693 0.68613139]] contamination [[0.85201794 0.79249448 0.77605322 0.75675676] [0. 
0.33333333 0.14492754 0.21666667]] ###Markdown You can see that 1-component classifier is pretty bad, while with 3 components, the performance significantly improves. K-nearest neighboursAs with density estimation (and kernel density estimation) the intuitive justification is that $p(y|x) \approx p(y|x')$ if $x'$ is very close to $x$. The number of neighbors, $K$, regulates the complexity of the classification. In simplest form, a majority rule classification is adopted, where each of the $K$ points votes on the classification. Increasing $K$ decreases the variance in the classification but at the expense of an increase in the bias. Weights can be assigned to individual votes by weighting the vote by the distance to the nearest point. ###Code #---------------------------------------------------------------------- # perform Classification classifiers = [] predictions = [] Ncolors = np.arange(1, X.shape[1] + 1) kvals = [1, 10] for k in kvals: classifiers.append([]) predictions.append([]) for nc in Ncolors: clf = KNeighborsClassifier(n_neighbors=k) clf.fit(X_train[:, :nc], y_train) y_pred = clf.predict(X_test[:, :nc]) classifiers[-1].append(clf) predictions[-1].append(y_pred) completeness, contamination = completeness_contamination(predictions, y_test) print("completeness", completeness) print("contamination", contamination) #------------------------------------------------------------ # Compute the decision boundary clf = classifiers[1][1] xlim = (0.7, 1.35) ylim = (-0.15, 0.4) xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 71), np.linspace(ylim[0], ylim[1], 81)) Z = clf.predict(np.c_[yy.ravel(), xx.ravel()]) Z = Z.reshape(xx.shape) #---------------------------------------------------------------------- # plot the results fig = plt.figure(figsize=(8, 4)) fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0, left=0.1, right=0.95, wspace=0.2) # left plot: data and decision boundary ax = fig.add_subplot(121) im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], 
c=y[-N_plot:], s=4, lw=0, cmap=plt.cm.Oranges, zorder=2) im.set_clim(-0.5, 1) im = ax.imshow(Z, origin='lower', aspect='auto', cmap=plt.cm.binary, zorder=1, extent=xlim + ylim) im.set_clim(0, 2) ax.contour(xx, yy, Z, [0.5], colors='k') ax.set_xlim(xlim) ax.set_ylim(ylim) ax.set_xlabel('$u-g$') ax.set_ylabel('$g-r$') ax.text(0.02, 0.02, "k = %i" % kvals[1], transform=ax.transAxes) # plot completeness vs Ncolors ax = fig.add_subplot(222) ax.plot(Ncolors, completeness[0], 'o-k', ms=6, label='k=%i' % kvals[0]) ax.plot(Ncolors, completeness[1], '^--k', ms=6, label='k=%i' % kvals[1]) ax.xaxis.set_major_locator(plt.MultipleLocator(1)) ax.yaxis.set_major_locator(plt.MultipleLocator(0.2)) ax.xaxis.set_major_formatter(plt.NullFormatter()) ax.set_ylabel('completeness') ax.set_xlim(0.5, 4.5) ax.set_ylim(-0.1, 1.1) ax.grid(True) # plot contamination vs Ncolors ax = fig.add_subplot(224) ax.plot(Ncolors, contamination[0], 'o-k', ms=6, label='k=%i' % kvals[0]) ax.plot(Ncolors, contamination[1], '^--k', ms=6, label='k=%i' % kvals[1]) ax.legend(loc='lower right', bbox_to_anchor=(1.0, 0.79)) ax.xaxis.set_major_locator(plt.MultipleLocator(1)) ax.yaxis.set_major_locator(plt.MultipleLocator(0.2)) ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i')) ax.set_xlabel('N colors') ax.set_ylabel('contamination') ax.set_xlim(0.5, 4.5) ax.set_ylim(-0.1, 1.1) ax.grid(True) plt.show() ###Output completeness [[0.22627737 0.3649635 0.46715328 0.54014599] [0.00729927 0.23357664 0.40145985 0.53284672]] contamination [[0.78014184 0.53271028 0.44347826 0.41732283] [0.875 0.44827586 0.24657534 0.23958333]]
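The distance-weighted voting mentioned above can be sketched directly in NumPy — a hypothetical helper, not the scikit-learn implementation (scikit-learn exposes the same idea via `KNeighborsClassifier(weights='distance')`):

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5, weighted=True):
    """Majority-vote KNN; optionally weight each vote by 1/distance."""
    preds = np.empty(len(X_query), dtype=y_train.dtype)
    for i, x in enumerate(X_query):
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
        nn = np.argsort(d)[:k]                    # indices of the k nearest neighbours
        w = 1.0 / (d[nn] + 1e-12) if weighted else np.ones(k)
        votes = np.bincount(y_train[nn], weights=w)
        preds[i] = np.argmax(votes)
    return preds
```

With `weighted=True` a nearby neighbour dominates distant ones, which reduces the sensitivity to the choice of `k`.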
trading-system-analysis.ipynb
###Markdown Fixing Null N ValuesWe use the negative trades as the base to set the N value. This value will be used for both positive and negative trades, following the System Quality Number rule from Van K. Tharp. ###Code negative_trades = df_clean.query('Result < 0 or Result == 0') n_value = negative_trades['Result'].sum()/negative_trades.shape[0] n_value.round(0) def set_null_nvalues(x): # use pd.notna: comparing x to the string 'NaN' never detects missing values if x <= 0: if pd.notna(x): return -(x / n_value) else: if pd.notna(x): return (x / -n_value) df_clean['N'] = df_clean['Result'].map(set_null_nvalues) df_clean.sample(10) df_clean.info() df_clean['Result'].describe() ###Output _____no_output_____ ###Markdown Metrics ###Code # Sharpe index df_clean['Result'].mean()/df_clean['Result'].std() # Profit factor pos = df_clean.query("Result > 0") neg = df_clean.query("Result < 0 or Result == 0") pos['Result'].sum()/-(neg['Result'].sum()) # Risk Reward rr = pos['Result'].mean()/-neg['Result'].mean() rr # Minimum Risk Reward w = df_clean.query("Result > 0").count()/df_clean.shape[0] w = pd.to_numeric(w) (1 - w)/w # Mathematical Expectation # Expected Reward # Edge edge = w - (1/(1+rr)) edge # Kelly w - (1-w)/rr ###Output _____no_output_____ ###Markdown System Quality Number ###Code # SQN import math r = (negative_trades.loc[:, "Result"].mean()) expectancy = (df_clean.loc[:, "Result"] / (-r)).mean() r_multiple = (df_clean.loc[:, "Result"] / (-r)) standard_deviation = r_multiple.std() square_root_num_trades = math.sqrt(len(df_clean)) sqn = round(((expectancy/standard_deviation)*square_root_num_trades), 3) sqn ###Output _____no_output_____ ###Markdown Ruin Risk Simple Way ###Code # Negative Trades Mean negative_trades = df_clean.query('Result < 0 or Result == 0') n_value = negative_trades['Result'].sum()/negative_trades.shape[0] n_value.round(0) # Convert to $ number_contracts = 1 n_value = n_value*(number_contracts*0.2) n_value # Total loss sequence in $ to reach 30% of total capital loss.
total_capital = 1000 # Risk per trade in $ risk_per_trade = 40 u = (total_capital*0.3)/risk_per_trade u #ruin_risk = ((1-edge)/(1+edge))^u ###Output _____no_output_____ ###Markdown Visualizations ###Code import matplotlib.pyplot as plt import seaborn as sb %matplotlib inline # Trading Block Evolution binsize = 10 bins = np.arange(0, df['Soma'].max()+binsize, binsize) def graph(): plt.plot(df.index, df['Soma']) plt.figure(figsize=[15,8]) plt.title('Trading Block Evolution') plt.ylabel('Points Sum') plt.xlabel('Number of Trades') plt.show(graph()); # Replace , with . and, set as float column N df_clean['N'] = df_clean['N'].str.replace(',','.') df_clean['N'] = df_clean['N'].astype(float) df_clean.sample(5) # Trades N distribution binsize = 0.05 bins = np.arange(0, df_clean['N'].max()+binsize, binsize) def graph(): plt.hist(df_clean['N']) #plt.hist(data = df_clean, alpha=0.8, facecolor='y', x = 'N', bins = bins) plt.figure(figsize=[15,8]) plt.title('N Distribution') plt.ylabel('Total Observations') plt.xlabel('N value') plt.xlim([-1, 1]) plt.show(graph()); ###Output _____no_output_____
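For reference, the ruin-risk formula left commented out above uses `^`, which is bitwise XOR in Python — exponentiation is `**`. A sketch under the classical gambler's-ruin assumptions (my own helper, with `edge` and `u` as computed in the cells above):

```python
def risk_of_ruin(edge, units):
    """Classical gambler's-ruin estimate ((1 - edge) / (1 + edge)) ** units.

    edge  : per-trade edge (probability advantage), 0 < edge < 1
    units : number of risk units lost before hitting the ruin level
    """
    if edge <= 0:
        return 1.0  # without an edge, ruin is certain in the long run
    return ((1 - edge) / (1 + edge)) ** units
```

A larger edge or more risk units before ruin both drive the estimate toward zero, which matches the intuition behind keeping `risk_per_trade` small relative to `total_capital`.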
jupyter/Cloud Pak for Data v3.5.x/Sched Square.ipynb
###Markdown Sched SquareThis tutorial includes everything you need to set up decision optimization engines, build constraint programming models. Table of contents:- [Describe the business problem](Describe-the-business-problem)* [How decision optimization (prescriptive analytics) can help](How--decision-optimization-can-help)* [Use decision optimization](Use-decision-optimization) * [Step 1: Model the Data](Step-1:-Model-the-data) * [Step 2: Set up the prescriptive model](Step-2:-Set-up-the-prescriptive-model) * [Define the decision variables](Define-the-decision-variables) * [Express the business constraints](Express-the-business-constraints) * [Express the search phase](Express-the-search-phase) * [Solve with Decision Optimization solve service](Solve-with-Decision-Optimization-solve-service) * [Step 3: Investigate the solution and run an example analysis](Step-3:-Investigate-the-solution-and-then-run-an-example-analysis)* [Summary](Summary)**** Describe the business problem* The aim of the square example is to place a set of small squares of different sizes into a large square. ***** How decision optimization can help* Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes. * Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. * Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage. + For example: + Automate complex decisions and trade-offs to better manage limited resources. 
+ Take advantage of a future opportunity or mitigate a future risk. + Proactively update recommendations based on changing events. + Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes. Use decision optimization Step 1: Model the data ###Code from docplex.cp.model import * ###Output _____no_output_____ ###Markdown Size of the englobing square ###Code SIZE_SQUARE = 112 ###Output _____no_output_____ ###Markdown Sizes of the sub-squares ###Code SIZE_SUBSQUARE = [50, 42, 37, 35, 33, 29, 27, 25, 24, 19, 18, 17, 16, 15, 11, 9, 8, 7, 6, 4, 2] ###Output _____no_output_____ ###Markdown Step 2: Set up the prescriptive model ###Code mdl = CpoModel(name="SchedSquare") ###Output _____no_output_____ ###Markdown Define the decision variables Create array of variables for sub-squares ###Code x = [] y = [] rx = pulse((0, 0), 0) ry = pulse((0, 0), 0) for i in range(len(SIZE_SUBSQUARE)): sq = SIZE_SUBSQUARE[i] vx = interval_var(size=sq, name="X" + str(i)) vx.set_end((0, SIZE_SQUARE)) x.append(vx) rx += pulse(vx, sq) vy = interval_var(size=sq, name="Y" + str(i)) vy.set_end((0, SIZE_SQUARE)) y.append(vy) ry += pulse(vy, sq) ###Output _____no_output_____ ###Markdown Express the business constraints Create dependencies between variables ###Code for i in range(len(SIZE_SUBSQUARE)): for j in range(i): mdl.add((end_of(x[i]) <= start_of(x[j])) | (end_of(x[j]) <= start_of(x[i])) | (end_of(y[i]) <= start_of(y[j])) | (end_of(y[j]) <= start_of(y[i]))) ###Output _____no_output_____ ###Markdown Set other constraints ###Code mdl.add(always_in(rx, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE)) mdl.add(always_in(ry, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE)) ###Output _____no_output_____ ###Markdown Express the search phase ###Code mdl.set_search_phases([search_phase(x), search_phase(y)]) ###Output _____no_output_____ ###Markdown Solve with Decision Optimization solve service ###Code msol = mdl.solve(TimeLimit=20) ###Output _____no_output_____ 
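A quick sanity check of the data, independent of the CP model (my own addition, not part of the tutorial): for a perfect, gap-free packing to exist, the 21 sub-square areas must sum exactly to the area of the enclosing square:

```python
SIZE_SQUARE = 112
SIZE_SUBSQUARE = [50, 42, 37, 35, 33, 29, 27, 25, 24, 19, 18,
                  17, 16, 15, 11, 9, 8, 7, 6, 4, 2]

total_area = sum(s * s for s in SIZE_SUBSQUARE)
# 12544 == 112**2, so a perfect (overlap-free, gap-free) tiling is possible
assert total_area == SIZE_SQUARE ** 2
```

This is exactly why the two `always_in` constraints above can require the cumulative pulse height to equal `SIZE_SQUARE` everywhere.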
###Markdown Step 3: Investigate the solution and then run an example analysis Print Solution ###Code print("Solution: ") msol.print_solution() ###Output _____no_output_____ ###Markdown Import graphical tools ###Code import docplex.cp.utils_visu as visu import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown *You can set __POP\_UP\_GRAPHIC=True__ if you prefer a pop up graphic window instead of an inline one.* ###Code POP_UP_GRAPHIC=False if msol and visu.is_visu_enabled(): import matplotlib.cm as cm from matplotlib.patches import Polygon if not POP_UP_GRAPHIC: %matplotlib inline # Plot external square print("Plotting squares....") fig, ax = plt.subplots() plt.plot((0, 0), (0, SIZE_SQUARE), (SIZE_SQUARE, SIZE_SQUARE), (SIZE_SQUARE, 0)) for i in range(len(SIZE_SUBSQUARE)): # Display square i (sx, sy) = (msol.get_var_solution(x[i]), msol.get_var_solution(y[i])) (sx1, sx2, sy1, sy2) = (sx.get_start(), sx.get_end(), sy.get_start(), sy.get_end()) poly = Polygon([(sx1, sy1), (sx1, sy2), (sx2, sy2), (sx2, sy1)], fc=cm.Set2(float(i) / len(SIZE_SUBSQUARE))) ax.add_patch(poly) # Display identifier of square i at its center ax.text(float(sx1 + sx2) / 2, float(sy1 + sy2) / 2, str(SIZE_SUBSQUARE[i]), ha='center', va='center') plt.margins(0) plt.show() ###Output _____no_output_____
hikyuu/examples/notebook/007-SystemDetails.ipynb
###Markdown Example: a channel-breakout system. Buy when the price breaks above the 20-day high; sell when the price falls below the 10-day low. ###Code # Create an account starting on 2001-01-01 with an initial capital of 200,000 my_tm = crtTM(Datetime(200101010000), 200000) my_sys = SYS_Simple(tm=my_tm) def TurtleSG(self): n1 = self.get_param("n1") n2 = self.get_param("n2") k = self.to c = CLOSE(k) h = REF(HHV(c, n1), 1) # high of the previous n1 days L = REF(LLV(c, n2), 1) # low of the previous n2 days for i in range(h.discard, len(k)): if (c[i] >= h[i]): self._add_buy_signal(k[i].datetime) elif (c[i] <= L[i]): self._add_sell_signal(k[i].datetime) my_sg = crtSG(TurtleSG, {'n1': 20, 'n2': 10}, 'TurtleSG') my_mm =
MM_FixedCount(1000) s = sm['sz000001'] query = Query(Datetime(200101010000), Datetime(201705010000)) my_sys.mm = my_mm my_sys.sg = my_sg my_sys.run(s, query) calendar = sm.get_trading_calendar(query, 'SZ') calendar x1 = my_tm.get_funds_curve(calendar, Query.DAY) PRICELIST(x1).plot() my_sys.mm = MM_FixedPercent(0.03) my_sys.run(s, query) x2 = my_tm.get_funds_curve(calendar, Query.DAY) PRICELIST(x2).plot() my_sys.mm = MM_FixedRisk(1000) my_sys.run(s, query) x3 = my_tm.get_funds_curve(calendar, Query.DAY) PRICELIST(x3).plot() my_sys.mm = MM_FixedCapital(1000) my_sys.run(s, query) x4 = my_tm.get_funds_curve(calendar, Query.DAY) PRICELIST(x4).plot() ax = create_figure(1) def x_plot(x, name, ax): px = PRICELIST(x) px.name = name px.plot(axes=ax, legend_on=True) x_plot(x1, 'MM_FixedCount', ax) x_plot(x2, 'MM_FixedPercent', ax) x_plot(x3, 'MM_FixedRisk', ax) x_plot(x4, 'MM_FixedCapital', ax) ###Output _____no_output_____ ###Markdown Example: a channel-breakout system. Buy when the price breaks above the 20-day high; sell when the price falls below the 10-day low. ###Code # Create an account starting on 2001-01-01 with an initial capital of 200,000 my_tm = crtTM(Datetime(200101010000), 200000) my_sys = SYS_Simple(tm=my_tm) def TurtleSG(self): n1 = self.getParam("n1") n2 = self.getParam("n2") k = self.getTO() c = CLOSE(k) h = REF(HHV(c, n1), 1) # high of the previous n1 days L = REF(LLV(c, n2), 1) # low of the previous n2 days for i in range(h.discard, len(k)): if (c[i] >= h[i]): self._addBuySignal(k[i].datetime) elif (c[i] <= L[i]): self._addSellSignal(k[i].datetime) my_sg = crtSG(TurtleSG, {'n1': 20, 'n2': 10}, 'TurtleSG') my_mm = MM_FixedCount(1000) s = sm['sz000001'] query = QueryByDate(Datetime(200101010000), Datetime(201705010000)) my_sys.mm = my_mm my_sys.sg = my_sg my_sys.run(s, query) calendar = sm.getTradingCalendar(query, 'SZ') calendar x1 = my_tm.getFundsCurve(calendar, Query.DAY) PRICELIST(x1).plot() my_sys.mm = MM_FixedPercent(0.03) my_sys.run(s, query) x2 = my_tm.getFundsCurve(calendar, Query.DAY) PRICELIST(x2).plot() my_sys.mm = MM_FixedRisk(1000) my_sys.run(s, query) x3 = my_tm.getFundsCurve(calendar, Query.DAY) PRICELIST(x3).plot() my_sys.mm =
MM_FixedCapital(1000) my_sys.run(s, query) x4 = my_tm.getFundsCurve(calendar, Query.DAY) PRICELIST(x4).plot() ax = create_figure(1) def x_plot(x, name, ax): px = PRICELIST(x) px.name = name px.plot(axes=ax, legend_on=True) x_plot(x1, 'MM_FixedCount', ax) x_plot(x2, 'MM_FixedPercent', ax) x_plot(x3, 'MM_FixedRisk', ax) x_plot(x4, 'MM_FixedCapital', ax) ###Output _____no_output_____
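The same breakout logic can be sketched without hikyuu, using plain Python lists of closing prices — a hypothetical standalone helper that mirrors `TurtleSG` above:

```python
def breakout_signals(closes, n1=20, n2=10):
    """Buy when close >= the highest close of the previous n1 bars;
    sell when close <= the lowest close of the previous n2 bars."""
    buys, sells = [], []
    for i in range(max(n1, n2), len(closes)):
        hi = max(closes[i - n1:i])  # high of the previous n1 bars
        lo = min(closes[i - n2:i])  # low of the previous n2 bars
        if closes[i] >= hi:
            buys.append(i)
        elif closes[i] <= lo:
            sells.append(i)
    return buys, sells
```

A steadily rising series produces a buy on every bar once the lookback window fills, while a sharp drop below the recent low triggers a sell — the same behaviour the hikyuu signal generator produces bar by bar.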
binder/vector-laplacian.ipynb
###Markdown Vector Laplacian in curvilinear coordinatesThe vector Laplacian is$$\nabla^2 \vec{u} = \nabla \cdot \nabla \vec{u}$$A vector identity gives the vector Laplacian as$$\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}$$We will check if this identity holds for shenfun using both cylindrical and spherical coordinates.For reference, the vector Laplacian is given [here](https://en.wikipedia.org/wiki/Del_in_cylindrical_and_spherical_coordinates)Cylinder coordinates are mapped to Cartesian through$$\begin{align*}x &= r \cos \theta \\y &= r \sin \theta \\z &= z\end{align*}$$and we use a domain $(r, \theta, z) \in [0, 1] \times [0, 2 \pi] \times [0, 2 \pi]$.Spherical coordinates are mapped as$$\begin{align*}x &= r \sin(\theta) \cos(\phi)\\y &= r \sin(\theta) \sin(\phi)\\z &= r \cos(\theta)\end{align*}$$for a domain $(r, \theta, \phi) \in [0, 1] \times [0, \pi] \times [0, 2 \pi]$.This is all we need to know for using these coordinate systems with shenfun. 
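Before handing the map to shenfun, it is easy to verify with plain sympy that the cylindrical map is orthogonal with scale factors $(1, r, 1)$ — a standalone check of the coordinate definition (my own sketch, not part of the shenfun API):

```python
import sympy as sp

r, theta, z = sp.symbols('r theta z', positive=True)
rv = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta), z])

# covariant basis vectors: b_i = d(position)/d(coordinate i)
b = [rv.diff(s) for s in (r, theta, z)]

# all off-diagonal dot products vanish -> the coordinates are orthogonal
assert all(sp.simplify(b[i].dot(b[j])) == 0
           for i in range(3) for j in range(3) if i != j)

# scale factors |b_i| = (1, r, 1); in particular |b_theta| = r
h = [sp.sqrt(sp.simplify(bi.dot(bi))) for bi in b]
assert h[0] == 1 and sp.simplify(h[1] - r) == 0 and h[2] == 1
```

The $|\mathbf{b}_{\theta}|=r$ factor is exactly what makes the shenfun output below differ from the unit-vector formulas on the referenced Wikipedia page.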
Cylinder coordinates ###Code from shenfun import * from IPython.display import Math import sympy as sp config['basisvectors'] = 'normal' #'covariant' # or r, theta, z = psi = sp.symbols('x,y,z', real=True, positive=True) rv = (r*sp.cos(theta), r*sp.sin(theta), z) N = 10 F0 = FunctionSpace(N, 'F', dtype='d') F1 = FunctionSpace(N, 'F', dtype='D') L = FunctionSpace(N, 'L', domain=(0, 1)) T = TensorProductSpace(comm, (L, F1, F0), coordinates=(psi, rv)) V = VectorSpace(T) u = TrialFunction(V) du = div(u) Math(du.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'})) du.tosympy(basis=(r*sp.cos(theta), sp.sin(theta), z), psi=psi) ###Output _____no_output_____ ###Markdown The vector Laplacian can now be found as ###Code du = div(grad(u)) #Math((div(grad(TrialFunction(T)))).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'})) T.coors.sg ###Output _____no_output_____ ###Markdown We can look at `du` using the following ###Code Math((du).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'})) ###Output _____no_output_____ ###Markdown Note that the basis vectors $\mathbf{b}_i$ are not unit vectors (i.e., of length 1). For this reason the equation does not look exactly like the one [here](https://en.wikipedia.org/wiki/Del_in_cylindrical_and_spherical_coordinates). The basis vectors are ###Code Math(T.coors.latex_basis_vectors(symbol_names={r: 'r', theta: '\\theta', z: 'z'})) ###Output _____no_output_____ ###Markdown Notice that $|\mathbf{b}_{\theta}|=r$. Shenfun can use either non-normalized covariant basis vectors or normalized (physical) basis vectors of length 1 for describing all vectors and higher order tensors. The vector components shown are contravariant and as such use a superscript $u^{\theta}$ and not subscript $u_{\theta}$. Note that for orthogonal coordinates the scaled unit vectors are the same for either contra- or covariant basis vectors and as such this distinction is not necessary here.
The distinction is only required for non-orthogonal coordinate systems. Shenfun can handle both orthogonal and non-orthogonal coordinates, but requires that equations to be solved are separable. Now check the vector identity$$\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}$$ ###Code dv = grad(div(u)) - curl(curl(u)) dv.simplify() Math((dv).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'})) ###Output _____no_output_____ ###Markdown We see that the order is different, but the vector is actually identical to the previous one (du). To show that they are equal we can subtract one from the other and simplify. ###Code dw = du-dv dw.simplify() Math(dw.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'})) ###Output _____no_output_____ ###Markdown If you are not convinced we can assemble some matrices and check that `du` and `dv` behave the same way. ###Code v = TestFunction(V) A0 = inner(v, du) A1 = inner(v, dv) ###Output _____no_output_____ ###Markdown `A0` and `A1` now contains lists of tensor product matrices, because the vector identities contain a lot of different terms (as we have seen above). To check that `A0` and `A1` are identical, we test the vector product of the matrices with a random vector. Since we are working with vectors we use a `BlockMatrix` for the combined tensor product matrices. ###Code u_hat = Function(V) u_hat[:] = np.random.random(u_hat.shape) + np.random.random(u_hat.shape)*1j a0 = BlockMatrix(A0) a1 = BlockMatrix(A1) b0 = Function(V) b1 = Function(V) b0 = a0.matvec(u_hat, b0) b1 = a1.matvec(u_hat, b1) print('Error ', np.linalg.norm(b0-b1)) ###Output _____no_output_____ ###Markdown Spherical coordinatesWe now turn to spherical coordinates and run the same test. 
###Code r, theta, phi = psi = sp.symbols('x,y,z', real=True, positive=True) rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta)) N = 6 F = FunctionSpace(N, 'F', dtype='d') L0 = FunctionSpace(N, 'L', domain=(0, 1)) L1 = FunctionSpace(N, 'L', domain=(0, np.pi)) T = TensorProductSpace(comm, (L0, L1, F), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta)))) V = VectorSpace(T) u = TrialFunction(V) du = div(grad(u)) dv = grad(div(u)) - curl(curl(u)) dv.simplify() dw = du-dv dw.simplify() Math(dw.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'})) ###Output _____no_output_____ ###Markdown This proves that for shenfun the vector identity $\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}$ holds true also for spherical coordinates. ###Code Math(du.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'})) Math(T.coors.latex_basis_vectors(symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'})) Math((grad(u)).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'})) Math((grad(u)[0]).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'})) ###Output _____no_output_____ ###Markdown Vector Laplacian in curvilinear coordinatesThe vector Laplacian is$$\nabla^2 \vec{u} = \nabla \cdot \nabla \vec{u}$$A vector identity gives the vector Laplacian as$$\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}$$We will check if this identity holds for shenfun using both cylindrical and spherical coordinates.For reference, the vector Laplacian is given [here](https://en.wikipedia.org/wiki/Del_in_cylindrical_and_spherical_coordinates)Cylinder coordinates are mapped to Cartesian through$$\begin{align*}x &= r \cos \theta \\y &= r \sin \theta \\z &= z\end{align*}$$and we use a domain $(r, \theta, z) \in [0, 1] \times [0, 2 \pi] \times [0, 2 \pi]$.Spherical coordinates are mapped as$$\begin{align*}x &= r \sin(\theta) 
\cos(\phi)\\y &= r \sin(\theta) \sin(\phi)\\z &= r \cos(\theta)\end{align*}$$for a domain $(r, \theta, \phi) \in [0, 1] \times [0, \pi] \times [0, 2 \pi]$. This is all we need to know for using these coordinate systems with shenfun. Cylinder coordinates ###Code from shenfun import * from IPython.display import Math import sympy as sp r, theta, z = psi = sp.symbols('x,y,z', real=True, positive=True) rv = (r*sp.cos(theta), r*sp.sin(theta), z) N = 10 F0 = FunctionSpace(N, 'F', dtype='d') F1 = FunctionSpace(N, 'F', dtype='D') L = FunctionSpace(N, 'L', domain=(0, 1)) T = TensorProductSpace(comm, (L, F1, F0), coordinates=(psi, rv)) V = VectorSpace(T) u = TrialFunction(V) du = div(u) Math(du.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'})) du.tosympy(basis=(r*sp.cos(theta), sp.sin(theta), z), psi=psi) ###Output _____no_output_____ ###Markdown The vector Laplacian can now be found as ###Code du = div(grad(u)) ###Output _____no_output_____ ###Markdown We can look at `du` using the following ###Code Math((du).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'})) ###Output _____no_output_____ ###Markdown Note that the basis vectors $\mathbf{b}_i$ are not unit vectors (i.e., of length 1). For this reason the equation does not look exactly like the one [here](https://en.wikipedia.org/wiki/Del_in_cylindrical_and_spherical_coordinates). The basis vectors are ###Code Math(T.coors.latex_basis_vectors(covariant=True, symbol_names={r: 'r', theta: '\\theta', z: 'z'})) ###Output _____no_output_____ ###Markdown Notice that $|\mathbf{b}_{\theta}|=r$. Shenfun uses non-normalized covariant basis vectors for describing all vectors and higher order tensors. The vector components are contravariant and as such use a superscript $u^{\theta}$ and not subscript $u_{\theta}$. Note that for orthogonal coordinates the scaled unit vectors are the same for either contra- or covariant basis vectors and as such this distinction is not necessary here.
The distinction is only required for non-orthogonal coordinate systems. Shenfun can handle both orthogonal and non-orthogonal coordinates, but requires that equations to be solved are separable. Now check the vector identity$$\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}$$ ###Code dv = grad(div(u)) - curl(curl(u)) dv.simplify() Math((dv).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'})) ###Output _____no_output_____ ###Markdown We see that the order is different, but the vector is actually identical to the previous one (du). To show that they are equal we can subtract one from the other and simplify. ###Code dw = du-dv dw.simplify() Math(dw.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'})) ###Output _____no_output_____ ###Markdown If you are not convinced we can assemble some matrices and check that `du` and `dv` behave the same way. ###Code v = TestFunction(V) A0 = inner(v, du) A1 = inner(v, dv) ###Output _____no_output_____ ###Markdown `A0` and `A1` now contains lists of tensor product matrices, because the vector identities contain a lot of different terms (as we have seen above). To check that `A0` and `A1` are identical, we test the vector product of the matrices with a random vector. Since we are working with vectors we use a `BlockMatrix` for the combined tensor product matrices. ###Code u_hat = Function(V) u_hat[:] = np.random.random(u_hat.shape) + np.random.random(u_hat.shape)*1j a0 = BlockMatrix(A0) a1 = BlockMatrix(A1) b0 = Function(V) b1 = Function(V) b0 = a0.matvec(u_hat, b0) b1 = a1.matvec(u_hat, b1) print('Error ', np.linalg.norm(b0-b1)) ###Output _____no_output_____ ###Markdown Spherical coordinatesWe now turn to spherical coordinates and run the same test. 
###Code r, theta, phi = psi = sp.symbols('x,y,z', real=True, positive=True) rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta)) N = 6 F = FunctionSpace(N, 'F', dtype='d') L0 = FunctionSpace(N, 'L', domain=(0, 1)) L1 = FunctionSpace(N, 'L', domain=(0, np.pi)) T = TensorProductSpace(comm, (L0, L1, F), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta)))) V = VectorSpace(T) u = TrialFunction(V) du = div(grad(u)) dv = grad(div(u)) - curl(curl(u)) dv.simplify() dw = du-dv dw.simplify() Math(dw.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'})) ###Output _____no_output_____
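As a sanity check independent of shenfun, the same identity can be verified symbolically with plain SymPy in Cartesian coordinates, where the basis vectors are constant. This sketch uses only `sympy.vector` and is not part of the notebook's pipeline; the helper name `vector_laplacian` is made up here.

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

C = CoordSys3D('C')
u1, u2, u3 = [sp.Function(n)(C.x, C.y, C.z) for n in ('u1', 'u2', 'u3')]
u = u1*C.i + u2*C.j + u3*C.k

def vector_laplacian(v):
    # component-wise Laplacian, valid because Cartesian basis vectors are constant
    out = Vector.zero
    for e in (C.i, C.j, C.k):
        comp = v.dot(e)
        out += sum(sp.diff(comp, s, 2) for s in (C.x, C.y, C.z))*e
    return out

lhs = vector_laplacian(u)
rhs = gradient(divergence(u)) - curl(curl(u))
# each component of lhs - rhs should simplify to zero
residuals = [sp.simplify((lhs - rhs).dot(e)) for e in (C.i, C.j, C.k)]
```

In curvilinear coordinates the component-wise Laplacian is no longer valid, which is exactly why the notebook relies on shenfun's coordinate machinery instead.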
2020/claudia/2020_12_05/code.ipynb
###Markdown Day 5 ###Code total_ident = set() with open('input.txt', 'r') as fd: for line in fd: line = line.strip() counter_row = [0, 127] counter_col = [0, 7] for i in line[:7]: if i == 'B': counter_row = [(counter_row[1]+counter_row[0])//2, counter_row[1]] elif i == 'F': counter_row = [counter_row[0], (counter_row[1]+counter_row[0])//2] for i in line[7:]: if i == 'R': counter_col = [(counter_col[1]+counter_col[0])//2, counter_col[1]] elif i == 'L': counter_col = [counter_col[0], (counter_col[1]+counter_col[0])//2] total_ident.add(counter_row[1] * 8 + counter_col[1]) max(total_ident) expected_ident = set(range(min(total_ident), max(total_ident) + 1)) expected_ident.difference(total_ident) ###Output _____no_output_____
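The halving loops above implement the binary search by hand. An equivalent, commonly used shortcut treats the whole boarding pass as one 10-bit binary number (F/L are 0-bits, B/R are 1-bits), since `row * 8 + col` is exactly that number:

```python
def seat_id(boarding_pass):
    # F/L select the lower half (bit 0), B/R the upper half (bit 1)
    return int(boarding_pass.translate(str.maketrans("FBLR", "0101")), 2)

# the worked example from the puzzle text: row 44, column 5
assert seat_id("FBFBBFFRLR") == 44 * 8 + 5 == 357
```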
TradingAI/Quantitative Trading/Lesson 24 - Risk Factor Models/portfolio_variance.ipynb
###Markdown Portfolio Variance ###Code import sys !{sys.executable} -m pip install -r requirements.txt import numpy as np import pandas as pd import time import os import quiz_helper import matplotlib.pyplot as plt %matplotlib inline plt.style.use('ggplot') plt.rcParams['figure.figsize'] = (14, 8) ###Output _____no_output_____ ###Markdown data bundle ###Code import os import quiz_helper from zipline.data import bundles os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..','data','module_4_quizzes_eod') ingest_func = bundles.csvdir.csvdir_equities(['daily'], quiz_helper.EOD_BUNDLE_NAME) bundles.register(quiz_helper.EOD_BUNDLE_NAME, ingest_func) print('Data Registered') ###Output Data Registered ###Markdown Build pipeline engine ###Code from zipline.pipeline import Pipeline from zipline.pipeline.factors import AverageDollarVolume from zipline.utils.calendars import get_calendar universe = AverageDollarVolume(window_length=120).top(500) trading_calendar = get_calendar('NYSE') bundle_data = bundles.load(quiz_helper.EOD_BUNDLE_NAME) engine = quiz_helper.build_pipeline_engine(bundle_data, trading_calendar) ###Output _____no_output_____ ###Markdown View Data With the pipeline engine built, let's get the stocks at the end of the period in the universe we're using. We'll use these tickers to generate the returns data for our risk model.
###Code universe_end_date = pd.Timestamp('2016-01-05', tz='UTC') universe_tickers = engine\ .run_pipeline( Pipeline(screen=universe), universe_end_date, universe_end_date)\ .index.get_level_values(1)\ .values.tolist() universe_tickers len(universe_tickers) from zipline.data.data_portal import DataPortal data_portal = DataPortal( bundle_data.asset_finder, trading_calendar=trading_calendar, first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day, equity_minute_reader=None, equity_daily_reader=bundle_data.equity_daily_bar_reader, adjustment_reader=bundle_data.adjustment_reader) ###Output _____no_output_____ ###Markdown Get pricing data helper function ###Code from quiz_helper import get_pricing ###Output _____no_output_____ ###Markdown get pricing data into a dataframe ###Code returns_df = \ get_pricing( data_portal, trading_calendar, universe_tickers, universe_end_date - pd.DateOffset(years=5), universe_end_date)\ .pct_change()[1:].fillna(0) #convert prices into returns returns_df ###Output _____no_output_____ ###Markdown Let's look at a two stock portfolioLet's pretend we have a portfolio of two stocks. We'll pick Apple and Microsoft in this example. ###Code aapl_col = returns_df.columns[3] msft_col = returns_df.columns[312] asset_return_1 = returns_df[aapl_col].rename('asset_return_aapl') asset_return_2 = returns_df[msft_col].rename('asset_return_msft') asset_return_df = pd.concat([asset_return_1,asset_return_2],axis=1) asset_return_df.head(2) ###Output _____no_output_____ ###Markdown Factor returnsLet's make up a "factor" by taking an average of all stocks in our list. You can think of this as an equal weighted index of the 490 stocks, kind of like a measure of the "market". We'll also make another factor by calculating the median of all the stocks. These are mainly intended to help us generate some data to work with. 
We'll go into how some common risk factors are generated later in the lessons.Also note that we're setting axis=1 so that we calculate a value for each time period (row) instead of one value for each column (assets). ###Code factor_return_1 = returns_df.mean(axis=1) factor_return_2 = returns_df.median(axis=1) factor_return_l = [factor_return_1, factor_return_2] ###Output _____no_output_____ ###Markdown Factor exposuresFactor exposures refer to how "exposed" a stock is to each factor. We'll get into this more later. For now, just think of this as one number for each stock, for each of the factors. ###Code from sklearn.linear_model import LinearRegression """ For now, just assume that we're calculating a number for each stock, for each factor, which represents how "exposed" each stock is to each factor. We'll discuss how factor exposure is calculated later in the lessons. """ def get_factor_exposures(factor_return_l, asset_return): lr = LinearRegression() X = np.array(factor_return_l).T y = np.array(asset_return.values) lr.fit(X,y) return lr.coef_ factor_exposure_l = [] for i in range(len(asset_return_df.columns)): factor_exposure_l.append( get_factor_exposures(factor_return_l, asset_return_df[asset_return_df.columns[i]] )) factor_exposure_a = np.array(factor_exposure_l) print(f"factor_exposures for asset 1 {factor_exposure_a[0]}") print(f"factor_exposures for asset 2 {factor_exposure_a[1]}") ###Output factor_exposures for asset 1 [ 1.35101534 -0.58353198] factor_exposures for asset 2 [-0.2283345 1.16364007] ###Markdown Variance of stock 1Calculate the variance of stock 1. 
$\textrm{Var}(r_{1}) = \beta_{1,1}^2 \textrm{Var}(f_{1}) + \beta_{1,2}^2 \textrm{Var}(f_{2}) + 2\beta_{1,1}\beta_{1,2}\textrm{Cov}(f_{1},f_{2}) + \textrm{Var}(s_{1})$ ###Code factor_exposure_1_1 = factor_exposure_a[0][0] factor_exposure_1_2 = factor_exposure_a[0][1] common_return_1 = factor_exposure_1_1 * factor_return_1 + factor_exposure_1_2 * factor_return_2 specific_return_1 = asset_return_1 - common_return_1 covm_f1_f2 = np.cov(factor_return_1,factor_return_2,ddof=1) #this calculates a covariance matrix # get the variance of each factor, and covariances from the covariance matrix covm_f1_f2 var_f1 = covm_f1_f2[0,0] var_f2 = covm_f1_f2[1,1] cov_f1_f2 = covm_f1_f2[0][1] # calculate the specific variance. var_s_1 = np.var(specific_return_1,ddof=1) # calculate the variance of asset 1 in terms of the factors and specific variance var_asset_1 = (factor_exposure_1_1**2 * var_f1) + \ (factor_exposure_1_2**2 * var_f2) + \ 2 * (factor_exposure_1_1 * factor_exposure_1_2 * cov_f1_f2) + \ var_s_1 print(f"variance of asset 1: {var_asset_1:.8f}") ###Output variance of asset 1: 0.00028209 ###Markdown Variance of stock 2Calculate the variance of stock 2. 
$\textrm{Var}(r_{2}) = \beta_{2,1}^2 \textrm{Var}(f_{1}) + \beta_{2,2}^2 \textrm{Var}(f_{2}) + 2\beta_{2,1}\beta_{2,2}\textrm{Cov}(f_{1},f_{2}) + \textrm{Var}(s_{2})$ ###Code factor_exposure_2_1 = factor_exposure_a[1][0] factor_exposure_2_2 = factor_exposure_a[1][1] common_return_2 = factor_exposure_2_1 * factor_return_1 + factor_exposure_2_2 * factor_return_2 specific_return_2 = asset_return_2 - common_return_2 # Notice we already calculated the variance and covariances of the factors # calculate the specific variance of asset 2 var_s_2 = np.var(specific_return_2,ddof=1) # calculate the variance of asset 2 in terms of the factors and specific variance var_asset_2 = (factor_exposure_2_1**2 * var_f1) + \ (factor_exposure_2_2**2 * var_f2) + \ (2 * factor_exposure_2_1 * factor_exposure_2_2 * cov_f1_f2) + \ var_s_2 print(f"variance of asset 2: {var_asset_2:.8f}") ###Output variance of asset 2: 0.00021856 ###Markdown Covariance of stocks 1 and 2 Calculate the covariance of stocks 1 and 2. $\textrm{Cov}(r_{1},r_{2}) = \beta_{1,1}\beta_{2,1}\textrm{Var}(f_{1}) + \beta_{1,1}\beta_{2,2}\textrm{Cov}(f_{1},f_{2}) + \beta_{1,2}\beta_{2,1}\textrm{Cov}(f_{1},f_{2}) + \beta_{1,2}\beta_{2,2}\textrm{Var}(f_{2})$ ###Code # TODO: calculate the covariance of assets 1 and 2 in terms of the factors cov_asset_1_2 = (factor_exposure_1_1 * factor_exposure_2_1 * var_f1) + \ (factor_exposure_1_1 * factor_exposure_2_2 * cov_f1_f2) + \ (factor_exposure_1_2 * factor_exposure_2_1 * cov_f1_f2) + \ (factor_exposure_1_2 * factor_exposure_2_2 * var_f2) print(f"covariance of assets 1 and 2: {cov_asset_1_2:.8f}") ###Output covariance of assets 1 and 2: 0.00007133 ###Markdown Quiz 1: calculate portfolio variance We'll choose stock weights for now (in a later lesson, you'll learn how to use portfolio optimization that uses alpha factors and a risk factor model to choose stock weights). $\textrm{Var}(r_p) = x_{1}^{2} \textrm{Var}(r_1) + x_{2}^{2} \textrm{Var}(r_2) + 2x_{1}x_{2}\textrm{Cov}(r_{1},r_{2})$
###Code weight_1 = 0.60 weight_2 = 0.40 # TODO: calculate portfolio variance var_portfolio = weight_1**2 * var_asset_1 + weight_2**2 * var_asset_2 + \ 2 * weight_1 * weight_2 * cov_asset_1_2 print(f"variance of portfolio is {var_portfolio:.8f}") ###Output variance of portfolio is 0.00017076 ###Markdown Quiz 2: Do it with Matrices!Create matrices $\mathbf{F}$, $\mathbf{B}$ and $\mathbf{S}$, where $\mathbf{F}= \begin{pmatrix}\textrm{Var}(f_1) & \textrm{Cov}(f_1,f_2) \\ \textrm{Cov}(f_2,f_1) & \textrm{Var}(f_2) \end{pmatrix}$is the covariance matrix of factors, $\mathbf{B} = \begin{pmatrix}\beta_{1,1}, \beta_{1,2}\\ \beta_{2,1}, \beta_{2,2}\end{pmatrix}$ is the matrix of factor exposures, and $\mathbf{S} = \begin{pmatrix}\textrm{Var}(s_i) & 0\\ 0 & \textrm{Var}(s_j)\end{pmatrix}$is the matrix of specific variances. $\mathbf{X} = \begin{pmatrix}x_{1} \\x_{2}\end{pmatrix}$ Concept QuestionWhat are the dimensions of the $\textrm{Var}(r_p)$ portfolio variance? Given this, when choosing whether to multiply a row vector or a column vector on the left and right sides of the $\mathbf{BFB}^T$, which choice helps you get the dimensions of the portfolio variance term?In other words:Given that $\mathbf{X}$ is a column vector, which makes more sense?$\mathbf{X}^T(\mathbf{BFB}^T + \mathbf{S})\mathbf{X}$ ? or $\mathbf{X}(\mathbf{BFB}^T + \mathbf{S})\mathbf{X}^T$ ? Answer 2 here:$\mathbf{X}^T(\mathbf{BFB}^T + \mathbf{S})\mathbf{X}$ ? 
Quiz 3: Calculate portfolio variance using matrices ###Code # TODO: covariance matrix of factors F = covm_f1_f2 F # TODO: matrix of factor exposures B = factor_exposure_a B # TODO: matrix of specific variances S = np.diag([var_s_1,var_s_2]) S ###Output _____no_output_____ ###Markdown Hint for column vectorsTry using [reshape](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.reshape.html) ###Code # TODO: make a column vector for stock weights matrix X X = np.array([weight_1,weight_2]).reshape(2,1) X # TODO: covariance matrix of assets var_portfolio = X.T.dot(B.dot(F).dot(B.T) + S).dot(X) print(f"portfolio variance is \n{var_portfolio[0][0]:.8f}") ###Output portfolio variance is 0.00017076
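The matrix form generalizes directly beyond two assets. Here is a small NumPy sketch of $\textrm{Var}(r_p) = \mathbf{X}^T(\mathbf{BFB}^T + \mathbf{S})\mathbf{X}$ for $N$ assets and $K$ factors; the numbers below are made up for illustration, not the fitted values from the notebook.

```python
import numpy as np

def portfolio_variance(weights, B, F, S):
    # B: (N, K) factor exposures, F: (K, K) factor covariance,
    # S: (N, N) diagonal specific variances, weights: (N,)
    x = np.asarray(weights).reshape(-1, 1)
    cov_assets = B @ F @ B.T + S
    return float(x.T @ cov_assets @ x)

# toy 2-asset, 2-factor example with identity exposures
B = np.eye(2)
F = np.array([[0.04, 0.01], [0.01, 0.09]])
S = np.diag([0.02, 0.03])
var_p = portfolio_variance([0.6, 0.4], B, F, S)
```

With identity exposures the asset covariance is just $\mathbf{F} + \mathbf{S}$, so the result can be checked by hand against the expanded two-asset formula from Quiz 1.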
H2O_Tutorial.ipynb
###Markdown ###Code ###Output _____no_output_____ ###Markdown **Introduction** H2O is an open source Machine Learning framework with fully-tested implementations of several widely-accepted ML algorithms. You just have to pick up the algorithm from its huge repository and apply it to your dataset. It contains the most widely used statistical and ML algorithms. H2O provides an easy-to-use open source platform for applying different ML algorithms on a given dataset. It provides **several statistical and ML algorithms including deep learning.** In this tutorial, we will consider examples and understand how to go about working with H2O. **Audience** This tutorial is designed to help all those learners who are aiming to develop a Machine Learning model on a huge database. **Prerequisites** It is assumed that the learner has a basic understanding of Machine Learning and is familiar with Python. **H2O Setup Guide** Have you ever been asked to develop a Machine Learning model on a **huge database**? Typically, you will be given the database and asked to make certain predictions, such as who the potential buyers will be, or whether there can be an early detection of fraudulent cases, etc. To answer these questions, your task would be to develop a Machine Learning algorithm that would provide an answer to the customer's query. Developing a Machine Learning algorithm from scratch is not an easy task, and why should you do this when there are **several ready-to-use Machine Learning libraries** available in the market? These days, you would rather use these libraries, apply a well-tested algorithm from these libraries and look at its performance. If the performance were not within acceptable limits, you would try to either fine-tune the current algorithm or try an altogether different one. Likewise, you may try multiple algorithms on the same dataset and then pick up the best one that satisfactorily meets the customer's requirements. This is where H2O comes to your rescue.
It is an open source Machine Learning framework with fully-tested implementations of several widely-accepted ML algorithms. You just have to pick up the algorithm from its huge repository and apply it to your dataset. It contains the most widely used statistical and ML algorithms. To mention a few, it includes **gradient boosted machines (GBM), generalized linear model (GLM), deep learning and many more**. Not only that, it also supports ***AutoML functionality*** that will rank the performance of different algorithms on your dataset, thus reducing your efforts of finding the best performing model. It is an in-memory platform that provides superb performance. To install H2O on your machine, see this web link: [H2O Installation Tutorial](https://www.tutorialspoint.com/h2o/h2o_installation.htm). We will understand how to use this in the command line so that you understand its working line-wise. If you are a Python lover, you may use Jupyter or any other IDE of your choice for developing H2O applications. The H2O also provides a web-based tool to test the different algorithms on your dataset. This is called Flow. The tutorial will introduce you to the use of **Flow**. Alongside, we will discuss the use of **AutoML** that will identify the best performing algorithm on your dataset. Are you not excited to learn H2O? Keep reading! **H2O provides many in-built ML and Deep Learning algorithms,
but in this tutorial my focus is to provide an AutoML tutorial.** **To use AutoML, start a new Jupyter notebook and follow the steps shown below.** **Importing AutoML** First import the H2O and AutoML packages into the project using the following two statements − ###Code !pip install -f http://h2o-release.s3.amazonaws.com/h2o/latest_stable_Py.html h2o import h2o from h2o.automl import H2OAutoML ###Output _____no_output_____ ###Markdown **Initialize H2O** Initialize h2o using the following statement − ###Code h2o.init() ###Output Checking whether there is an H2O instance running at http://localhost:54321 ..... not found. Attempting to start a local H2O server... Java Version: openjdk version "11.0.7" 2020-04-14; OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-2ubuntu218.04); OpenJDK 64-Bit Server VM (build 11.0.7+10-post-Ubuntu-2ubuntu218.04, mixed mode, sharing) Starting server from /usr/local/lib/python3.6/dist-packages/h2o/backend/bin/h2o.jar Ice root: /tmp/tmpotwijlgi JVM stdout: /tmp/tmpotwijlgi/h2o_unknownUser_started_from_python.out JVM stderr: /tmp/tmpotwijlgi/h2o_unknownUser_started_from_python.err Server is running at http://127.0.0.1:54321 Connecting to H2O server at http://127.0.0.1:54321 ... successful. ###Markdown **Loading Data** We are using the iris.csv dataset. Load the data using the following statement − ###Code from sklearn import datasets data = h2o.import_file('https://gist.githubusercontent.com/btkhimsar/ed560337d8b944832d1c1f55fac093fc/raw/6f9306ad21398ea43cba4f7d537619d0e07d5ae3/iris.csv') data.columns ###Output _____no_output_____ ###Markdown **Preparing Dataset** We need to decide on the features and the prediction columns. We use the same features and the prediction column as in our earlier case.
Set the features and the output column using the following two statements − ###Code features = ['sepal.length', 'sepal.width', 'petal.length', 'petal.width'] output = 'variety' ###Output _____no_output_____ ###Markdown Split the data in 80:20 ratio for training and testing − ###Code train, test = data.split_frame(ratios=[0.8]) ###Output _____no_output_____ ###Markdown **Applying AutoML** Now, we are all set for applying AutoML on our dataset. The AutoML will run for a fixed amount of time set by us and give us the optimized model. We set up the AutoML using the following statement − ###Code automl = H2OAutoML(max_models = 30, max_runtime_secs=300, seed = 1) ###Output _____no_output_____ ###Markdown The first parameter specifies the number of models that we want to evaluate and compare.The second parameter specifies the time for which the algorithm runs.We now call the train method on the AutoML object as shown here − ###Code automl.train(x =features, y =output, training_frame = train) ###Output AutoML progress: |███████████ 18:52:11.228: Skipping training of model GBM_5_AutoML_20200729_185148 due to exception: water.exceptions.H2OModelBuilderIllegalArgumentException: Illegal argument(s) for GBM model: GBM_5_AutoML_20200729_185148. Details: ERRR on field: _min_rows: The dataset size is too small to split for min_rows=100.0: must have at least 200.0 (weighted) rows, but have only 123.0. █████████████████████████████████████████████| 100% ###Markdown We specify the x as the features array that we created earlier, the y as the output variable to indicate the predicted value and the dataframe as train dataset.Run the code, you will have to wait for 5 minutes (we set the max_runtime_secs to 300) until you get the following output − **Printing the Leaderboard** When the AutoML processing completes, it creates a leaderboard ranking all the 30 algorithms that it has evaluated. 
To see the first 10 records of the leaderboard, use the following code − ###Code lb = automl.leaderboard lb.head() ###Output _____no_output_____ ###Markdown **Predicting on Test Data** Now, you have the models ranked, you can see the performance of the top-rated model on your test data. To do so, run the following code statement − ###Code preds = automl.predict(test) ###Output glm prediction progress: |████████████████████████████████████████████████| 100% ###Markdown **Printing Result** Print the predicted result using the following statement − ###Code print (preds) ###Output _____no_output_____ ###Markdown **Printing the Ranking for All** If you want to see the ranks of all the tested algorithms, run the following code statement − ###Code lb.head(rows = lb.nrows) ###Output _____no_output_____ ###Markdown **Conclusion** H2O provides an easy-to-use open source platform for applying different ML algorithms on a given dataset. It provides several statistical and ML algorithms including deep learning. During testing, you can fine tune the parameters to these algorithms. You can do so using command-line or the provided web-based interface called Flow. H2O also supports AutoML that provides the ranking amongst the several algorithms based on their performance. H2O also performs well on Big Data. This is definitely a boon for Data Scientist to apply the different Machine Learning models on their dataset and pick up the best one to meet their needs. ###Code ###Output _____no_output_____
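What the leaderboard automates can be mimicked in a few lines of plain Python. The following toy illustration (it does not use H2O, and all the names in it are made up) scores several candidate models, ranks them, and keeps the best one, which is conceptually what AutoML does at scale:

```python
def accuracy(model, data):
    # fraction of examples the model labels correctly
    return sum(model(x) == y for x, y in data) / len(data)

data = [(x, x > 5) for x in range(10)]      # tiny labelled dataset
train, test = data[:8], data[8:]

candidates = {
    "threshold_4": lambda x: x > 4,
    "threshold_5": lambda x: x > 5,
    "always_false": lambda x: False,
}

# rank models by training accuracy, best first: a minimal "leaderboard"
leaderboard = sorted(candidates,
                     key=lambda m: accuracy(candidates[m], train),
                     reverse=True)
best = candidates[leaderboard[0]]
```

H2O's AutoML additionally handles cross-validation, time budgets and stacked ensembles, which is why it is worth using over a hand-rolled loop like this.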
Notebooks/5_saturationPlots.ipynb
###Markdown Producing novel_utron saturation plots of assembled UTRons from the simulations - Opens databases corresponding to random groupings of samples- (e.g.) (10.1, 10.2, 10.3) are three repeats using 10 random samples Function below counts the number of utrons in the dataframe returned from the query.- Database name corresponds to the no. of samples assembled Function accepts a "num"- (if num = 1 ) = returns number of transcripts assembled- (if num = 2) = returns number of genes with assembled utrons ###Code import pandas as pd import sqlite3 %pylab inline def count_utrons(database, num): cnx = sqlite3.connect(database) cnx.execute("ATTACH '/shared/sudlab1/General/annotations/hg38_noalt_ensembl85/csvdb' as annotations") if num == 1: query_text1 = ''' SELECT * FROM novel_utrons_ids WHERE track='agg-agg-agg' AND transcript_id like "MSTRG%" ORDER BY transcript_id''' if num == 2: query_text1 = ''' SELECT * FROM novel_utrons_ids AS uid INNER JOIN annotations.transcript_info AS ti ON ti.transcript_id = match_transcript_id WHERE track='agg-agg-agg' AND uid.transcript_id like "MSTRG%" GROUP BY gene_name ORDER BY transcript_id ''' query1 = pd.read_sql_query(query_text1, cnx) num = query1.shape[0] return num # Names of the databases to be opened db_list = ["1.1", "1.2","1.3","1.4","1.5","1.6", "10.1", "10.2", "10.3", "20.1", "20.2", "20.3", "30.1", "30.2", "30.3", "40.1", "40.2", "40.3", "50.1", "50.2", "50.3", "55.1", "55.2", "55.3", "60"] # Number of repeats in each database (for plotting later) num_samples = [1]*6 + [10]*3 + [20]*3 + [30]*3 + [40]*3 + [50]*3 + [55]*3 + [60] # Get utron counts for each database (transcript count) utron_counts =[] for db in db_list: db_name = "/shared/sudlab1/General/projects/utrons_project/Simulations/Saturation/"+db+".db" a=count_utrons(db_name, 1) utron_counts.append(a) # Get utron counts for each database (gene count) utron_counts2 =[] for db in db_list: db_name =
"/shared/sudlab1/General/projects/utrons_project/Simulations/Saturation/"+db+".db" a=count_utrons(db_name, 2) utron_counts2.append(a) # Find means for each set of repeats # Transcript means means = [] means.append(sum(utron_counts[0:6])/6.0); means.append(sum(utron_counts[6:9])/3.0) means.append(sum(utron_counts[9:12])/3.0); means.append(sum(utron_counts[12:15])/3.0); means.append(sum(utron_counts[15:18])/3.0); means.append(sum(utron_counts[18:21])/3.0) means.append(sum(utron_counts[21:24])/3.0); means.append(sum(utron_counts[24:])/1.0) # Gene level means means2 = [] means2.append(sum(utron_counts2[0:6])/6.0); means2.append(sum(utron_counts2[6:9])/3.0) means2.append(sum(utron_counts2[9:12])/3.0); means2.append(sum(utron_counts2[12:15])/3.0) means2.append(sum(utron_counts2[15:18])/3.0); means2.append(sum(utron_counts2[18:21])/3.0) means2.append(sum(utron_counts2[21:24])/3.0); means2.append(sum(utron_counts2[24:])/1.0) #x-axis values for plotting plotlist = [1,10,20,30,40,50,55,60] # Transcript counts pylab.plot(num_samples, utron_counts, '+', label="TRANSCRIPTS", color='r') # Gene counts pylab.plot(num_samples, utron_counts2, '+', label='GENES', color='b') # Means pylab.plot(plotlist, means, color='r') pylab.plot(plotlist, means2, color='b') pylab.legend(loc=2, fontsize="x-small") pylab.xlabel('Samples'); pylab.ylabel('Novel UTRons') pylab.xlim(0,60) pylab.savefig("./images/5_SaturationCurve", dpi=300) ###Output _____no_output_____ ###Markdown for transcripts... the graph seems to almost be levelling off at n=60 (although still on a slight upwards trajectory). for genes... the graph seems to level off at n=20 to n=30 samples (i.e. we seem to be picking up additional transcripts, but in the same number of genes) - have we found all the possible genes with utrons in?
###Code #################################### # LIST OF GENES WITH UTRONS IN THEM ##################################### cnx = sqlite3.connect("/shared/sudlab1/General/projects/utrons_project/Simulations/Saturation/55.2.db") cnx.execute("ATTACH '/shared/sudlab1/General/annotations/hg38_noalt_ensembl85/csvdb' as annotations") query_text1 = ''' SELECT * FROM novel_utrons_ids AS uid INNER JOIN annotations.transcript_info AS ti ON ti.transcript_id = match_transcript_id WHERE track='agg-agg-agg' AND uid.transcript_id like "MSTRG%" GROUP BY gene_name ORDER BY transcript_id ''' query1 = pd.read_sql_query(query_text1, cnx) a = query1["gene_name"].tolist() outfile = open("/shared/sudlab1/General/projects/utrons_project/misc_files/systematicUtronGenes.txt", 'w') for line in sorted(a): line = line + "\n" outfile.write(line) outfile.close() ###Output _____no_output_____
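As an aside, the slice-by-slice mean computation in the saturation cells above could be written more compactly by grouping counts by their sample size. This is only a sketch, assuming each entry of the counts list lines up with `num_samples` as in this notebook:

```python
# Sketch: mean utron count per sample size, grouping instead of
# hard-coded slices. Assumes counts[i] pairs with num_samples[i].
from collections import defaultdict

def mean_by_sample_size(num_samples, counts):
    groups = defaultdict(list)
    for n, c in zip(num_samples, counts):
        groups[n].append(c)
    sizes = sorted(groups)
    return sizes, [sum(groups[n]) / float(len(groups[n])) for n in sizes]

# e.g. with made-up counts for two sample sizes:
sizes, means = mean_by_sample_size([1, 1, 10, 10], [4, 6, 20, 30])
```

The returned `sizes` list doubles as the x-axis values for the mean curves, removing the need for a hand-maintained `plotlist`.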
notebooks/17-Parallel Processing/01-Multithreading and Multiprocessing.ipynb
###Markdown Multithreading and MultiprocessingRecall the phrase "many hands make light work". This is as true in programming as anywhere else.What if you could engineer your Python program to do four things at once? What would normally take an hour could (almost) take one fourth the time.\*This is the idea behind parallel processing, or the ability to set up and run multiple tasks concurrently.\* *We say almost, because you do have to take time setting up four processors, and it may take time to pass information between them.* Threading vs. ProcessingA good illustration of threading vs. processing would be to download an image file and turn it into a thumbnail.The first part, communicating with an outside source to download a file, involves a thread. Once the file is obtained, the work of converting it involves a process. Essentially, two factors determine how long this will take: the input/output speed of the network communication, or I/O, and the available processor, or CPU. I/O-intensive processes improved with multithreading:* webscraping* reading and writing to files* sharing data between programs* network communications CPU-intensive processes improved with multiprocessing:* computations* text formatting* image rescaling* data analysis Multithreading Example: WebscrapingHistorically, the programming knowledge required to set up multithreading was beyond the scope of this course, as it involved a good understanding of Python's Global Interpreter Lock (the GIL prevents multiple threads from running the same Python code at once). Also, you had to set up special classes that behave like Producers to divvy up the work, Consumers (aka "workers") to perform the work, and a Queue to hold tasks and provide communications. And that was just the beginning.Fortunately, we've already learned one of the most valuable tools we'll need – the `map()` function. 
When we apply it using two standard libraries, *multiprocessing* and *multiprocessing.dummy*, setting up parallel processes and threads becomes fairly straightforward. Here's a classic multithreading example provided by [IBM](http://www.ibm.com/developerworks/aix/library/au-threadingpython/) and adapted by [Chris Kiehl](http://chriskiehl.com/article/parallelism-in-one-line/) where you divide the task of retrieving web pages across multiple threads: import time import threading import Queue import urllib2 class Consumer(threading.Thread): def __init__(self, queue): threading.Thread.__init__(self) self._queue = queue def run(self): while True: content = self._queue.get() if isinstance(content, str) and content == 'quit': break response = urllib2.urlopen(content) print 'Thanks!' def Producer(): urls = [ 'http://www.python.org', 'http://www.yahoo.com' 'http://www.scala.org', 'http://www.google.com' etc.. ] queue = Queue.Queue() worker_threads = build_worker_pool(queue, 4) start_time = time.time() Add the urls to process for url in urls: queue.put(url) Add the poison pill for worker in worker_threads: queue.put('quit') for worker in worker_threads: worker.join() print 'Done! Time taken: {}'.format(time.time() - start_time) def build_worker_pool(queue, size): workers = [] for _ in range(size): worker = Consumer(queue) worker.start() workers.append(worker) return workers if __name__ == '__main__': Producer() Using the multithreading library provided by the *multiprocessing.dummy* module and `map()` all of this becomes: import urllib2 from multiprocessing.dummy import Pool as ThreadPool pool = ThreadPool(4) choose a number of workers urls = [ 'http://www.python.org', 'http://www.yahoo.com' 'http://www.scala.org', 'http://www.google.com' etc.. ] results = pool.map(urllib2.urlopen, urls) pool.close() pool.join() In the above code, the *multiprocessing.dummy* module provides the parallel threads, and `map(urllib2.urlopen, urls)` assigns the labor! 
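The snippets above are Python 2 (`urllib2`, `Queue`, `print` statements). As a sketch, the same thread-pool pattern in Python 3 looks like the following; the network call is swapped for a local stand-in function so the example runs without internet access, the assumption being that in real use you would pass `urllib.request.urlopen` instead:

```python
# Python 3 sketch of the thread-pool pattern above.
# fake_fetch stands in for urllib.request.urlopen so this runs
# without network access; it just "measures" each url string.
from multiprocessing.dummy import Pool as ThreadPool

def fake_fetch(url):
    return len(url)  # a real version would return the downloaded page

urls = ['http://www.python.org',
        'http://www.yahoo.com',
        'http://www.scala-lang.org',
        'http://www.google.com']

with ThreadPool(4) as pool:   # choose a number of workers
    results = pool.map(fake_fetch, urls)
```

The `with` block replaces the explicit `pool.close()` / `pool.join()` calls, and `pool.map()` preserves the order of the input list in its results.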
Multiprocessing Example: Monte CarloLet's code out an example to see how the parts fit together. We can time our results using the *timeit* module to measure any performance gains. Our task is to apply the Monte Carlo Method to estimate the value of Pi. Monte Carlo Method and Estimating PiIf you draw a circle of radius 1 (a unit circle) and enclose it in a square, the areas of the two shapes are given as Area Formulas circle$$πr^2$$ square$$4 r^2$$Therefore, the ratio of the area of the circle to the area of the square is $$\frac{π}{4}$$The Monte Carlo Method plots a series of random points inside the square. By comparing the number that fall within the circle to those that fall outside, with a large enough sample we should have a good approximation of Pi. You can see a good demonstration of this [here](https://academo.org/demos/estimating-pi-monte-carlo/) (Hit the **Animate** button on the page).For a given number of points *n*, we have $$π = \frac{4 \cdot points\ inside\ circle}{total\ points\ n}$$To set up our multiprocessing program, we first derive a function for finding Pi that we can pass to `map()`: ###Code from random import random # perform this import outside the function def find_pi(n): """ Function to estimate the value of Pi """ inside=0 for i in range(0,n): x=random() y=random() if (x*x+y*y)**(0.5)<=1: # if i falls inside the circle inside+=1 pi=4*inside/n return pi ###Output _____no_output_____ ###Markdown Let's test `find_pi` on 5,000 points: ###Code find_pi(5000) ###Output _____no_output_____ ###Markdown This ran very quickly, but the results are not very accurate!Next we'll write a script that sets up a pool of workers, and lets us time the results against varying sized pools. We'll set up two arguments to represent *processes* and *total_iterations*. 
Inside the script, we'll break *total_iterations* down into the number of iterations passed to each process, by making a processes-sized list.For example: total_iterations = 1000 processes = 5 iterations = [total_iterations//processes]*processes iterations Output: [200, 200, 200, 200, 200] This list will be passed to our `map()` function along with `find_pi()` ###Code %%writefile test.py from random import random from multiprocessing import Pool import timeit def find_pi(n): """ Function to estimate the value of Pi """ inside=0 for i in range(0,n): x=random() y=random() if (x*x+y*y)**(0.5)<=1: # if i falls inside the circle inside+=1 pi=4*inside/n return pi if __name__ == '__main__': N = 10**5 # total iterations P = 5 # number of processes p = Pool(P) print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.7f}'), number=10)) p.close() p.join() print(f'{N} total iterations with {P} processes') ! python test.py ###Output 3.1466800 3.1364400 3.1470400 3.1370400 3.1256400 3.1398400 3.1395200 3.1363600 3.1437200 3.1334400 0.2370227286270967 100000 total iterations with 5 processes ###Markdown Great! The above test took under a second on our computer.Now that we know our script works, let's increase the number of iterations, and compare two different pools. Sit back, this may take a while! 
###Code %%writefile test.py from random import random from multiprocessing import Pool import timeit def find_pi(n): """ Function to estimate the value of Pi """ inside=0 for i in range(0,n): x=random() y=random() if (x*x+y*y)**(0.5)<=1: # if i falls inside the circle inside+=1 pi=4*inside/n return pi if __name__ == '__main__': N = 10**7 # total iterations P = 1 # number of processes p = Pool(P) print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.7f}'), number=10)) p.close() p.join() print(f'{N} total iterations with {P} processes') P = 5 # number of processes p = Pool(P) print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.7f}'), number=10)) p.close() p.join() print(f'{N} total iterations with {P} processes') ! python test.py ###Output 3.1420964 3.1417412 3.1411108 3.1408184 3.1414204 3.1417656 3.1408324 3.1418828 3.1420492 3.1412804 36.03526345242264 10000000 total iterations with 1 processes 3.1424524 3.1418376 3.1415292 3.1410344 3.1422376 3.1418736 3.1420540 3.1411452 3.1421652 3.1410672 17.300921846344366 10000000 total iterations with 5 processes ###Markdown Hopefully you saw that with 5 processes our script ran faster! More is Better ...to a point.The gain in speed as you add more parallel processes tends to flatten out at some point. In any collection of tasks, there are going to be one or two that take longer than average, and no amount of added processing can speed them up. This is best described in [Amdahl's Law](https://en.wikipedia.org/wiki/Amdahl%27s_law). Advanced ScriptIn the example below, we'll add a context manager to shrink these three lines p = Pool(P) ... p.close() p.join() to one line: with Pool(P) as p: And we'll accept command line arguments using the *sys* module. 
###Code %%writefile test2.py from random import random from multiprocessing import Pool import timeit import sys N = int(sys.argv[1]) # these arguments are passed in from the command line P = int(sys.argv[2]) def find_pi(n): """ Function to estimate the value of Pi """ inside=0 for i in range(0,n): x=random() y=random() if (x*x+y*y)**(0.5)<=1: # if i falls inside the circle inside+=1 pi=4*inside/n return pi if __name__ == '__main__': with Pool(P) as p: print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.5f}'), number=10)) print(f'{N} total iterations with {P} processes') ! python test2.py 10000000 500 ###Output 3.14121 3.14145 3.14178 3.14194 3.14109 3.14201 3.14243 3.14150 3.14203 3.14116 16.871822701405073 10000000 total iterations with 500 processes
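Amdahl's Law, linked above, can be made concrete with a couple of lines of arithmetic: if a fraction *p* of a program parallelizes perfectly, the best possible speedup on *n* processes is 1 / ((1 - p) + p/n), which flattens toward 1/(1 - p) no matter how many workers you add. A quick sketch:

```python
# Amdahl's Law: upper bound on speedup when a fraction p of the
# work parallelizes perfectly across n processes.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even at 95% parallel code, the ceiling is 1/(1-0.95) = 20x:
for n in (1, 5, 50, 500):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

This is why doubling the pool size in the experiments above does not double the measured speed: the serial 5% of the work eventually dominates.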
docs/source/dowhy_confounder_example.ipynb
###Markdown Confounding Example: Finding causal effects from observed dataSuppose you are given some data with treatment and outcome. Can you determine whether the treatment causes the outcome, or the correlation is purely due to another common cause? ###Code import os, sys sys.path.append(os.path.abspath("../../")) import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import math import dowhy from dowhy.do_why import CausalModel import dowhy.datasets, dowhy.plotter ###Output _____no_output_____ ###Markdown Let's create a mystery dataset for which we need to determine whether there is a causal effect.Creating the dataset. It is generated from either one of two models:* **Model 1**: Treatment does cause outcome. * **Model 2**: Treatment does not cause outcome. All observed correlation is due to a common cause. ###Code rvar = 1 if np.random.uniform() >0.5 else 0 data_dict = dowhy.datasets.xy_dataset(10000, effect=rvar, sd_error=0.2) df = data_dict['df'] print(df[["Treatment", "Outcome", "w0"]].head()) dowhy.plotter.plot_treatment_outcome(df[data_dict["treatment_name"]], df[data_dict["outcome_name"]], df[data_dict["time_val"]]) ###Output _____no_output_____ ###Markdown Using DoWhy to resolve the mystery: *Does Treatment cause Outcome?* STEP 1: Model the problem as a causal graphInitializing the causal model. ###Code model= CausalModel( data=df, treatment=data_dict["treatment_name"], outcome=data_dict["outcome_name"], common_causes=data_dict["common_causes_names"], instruments=data_dict["instrument_names"]) model.view_model(layout="dot") ###Output WARNING:dowhy.do_why:Causal Graph not provided. DoWhy will construct a graph based on data inputs. 
INFO:dowhy.do_why:Model to find the causal effect of treatment ['Treatment'] on outcome ['Outcome'] ###Markdown Showing the causal model stored in the local file "causal_model.png" ###Code from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown STEP 2: Identify causal effect using properties of the formal causal graphIdentify the causal effect using properties of the causal graph. ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['w0', 'U'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. ###Markdown STEP 3: Estimate the causal effectOnce we have identified the estimand, we can use any statistical method to estimate the causal effect. Let's use Linear Regression for simplicity. ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.linear_regression") print("Causal Estimate is " + str(estimate.value)) # Plot Slope of line between treamtent and outcome =causal effect dowhy.plotter.plot_causal_effect(estimate, df[data_dict["treatment_name"]], df[data_dict["outcome_name"]]) ###Output INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator INFO:dowhy.causal_estimator:b: Outcome~Treatment+w0 ###Markdown Checking if the estimate is correct ###Code print("DoWhy estimate is " + str(estimate.value)) print ("Actual true causal effect was {0}".format(rvar)) ###Output DoWhy estimate is 0.0051827863049 Actual true causal effect was 0 ###Markdown Step 4: Refuting the estimateWe can also refute the estimate to check its robustness to assumptions (*aka* sensitivity analysis, but on steroids). 
Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator INFO:dowhy.causal_estimator:b: Outcome~Treatment+w0+w_random ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator INFO:dowhy.causal_estimator:b: Outcome~placebo+w0 ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator INFO:dowhy.causal_estimator:b: Outcome~Treatment+w0 ###Markdown Confounding Example: Finding causal effects from observed dataSuppose you are given some data with treatment and outcome. Can you determine whether the treatment causes the outcome, or the correlation is purely due to another common cause? ###Code import os, sys sys.path.append(os.path.abspath("../../")) import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import math import dowhy from dowhy.do_why import CausalModel import dowhy.datasets, dowhy.plotter ###Output _____no_output_____ ###Markdown Let's create a mystery dataset for which we need to determine whether there is a causal effect.Creating the dataset. It is generated from either one of two models:* **Model 1**: Treatment does cause outcome. * **Model 2**: Treatment does not cause outcome. All observed correlation is due to a common cause. 
###Code rvar = 1 if np.random.uniform() >0.5 else 0 data_dict = dowhy.datasets.xy_dataset(10000, effect=rvar, sd_error=0.2) df = data_dict['df'] print(df[["Treatment", "Outcome", "w0"]].head()) dowhy.plotter.plot_treatment_outcome(df[data_dict["treatment_name"]], df[data_dict["outcome_name"]], df[data_dict["time_val"]]) ###Output _____no_output_____ ###Markdown Using DoWhy to resolve the mystery: *Does Treatment cause Outcome?* STEP 1: Model the problem as a causal graphInitializing the causal model. ###Code model= CausalModel( data=df, treatment=data_dict["treatment_name"], outcome=data_dict["outcome_name"], common_causes=data_dict["common_causes_names"], instruments=data_dict["instrument_names"]) model.view_model(layout="dot") ###Output WARNING:dowhy.do_why:Causal Graph not provided. DoWhy will construct a graph based on data inputs. ###Markdown Showing the causal model stored in the local file "causal_model.png" ###Code from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown STEP 2: Identify causal effect using properties of the formal causal graphIdentify the causal effect using properties of the causal graph. ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:{'w0', 'U'} ###Markdown STEP 3: Estimate the causal effectOnce we have identified the estimand, we can use any statistical method to estimate the causal effect. Let's use Linear Regression for simplicity. 
###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.linear_regression") print("Causal Estimate is " + str(estimate.value)) # Plot Slope of line between treamtent and outcome =causal effect dowhy.plotter.plot_causal_effect(estimate, df[data_dict["treatment_name"]], df[data_dict["outcome_name"]]) ###Output INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator INFO:dowhy.causal_estimator:b: Outcome~Treatment+w0 ###Markdown Checking if the estimate is correct ###Code print("DoWhy estimate is " + str(estimate.value)) print ("Actual true causal effect was {0}".format(rvar)) ###Output DoWhy estimate is 0.0180444904797 Actual true causal effect was 0 ###Markdown Step 4: Refuting the estimateWe can also refute the estimate to check its robustness to assumptions (*aka* sensitivity analysis, but on steroids). Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator INFO:dowhy.causal_estimator:b: Outcome~Treatment+w0+w_random ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator INFO:dowhy.causal_estimator:b: Outcome~placebo+w0 ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator INFO:dowhy.causal_estimator:b: Outcome~Treatment+w0
###Markdown Recommender System Evaluations - Part 2> Understanding evaluation metrics and pricing factors- toc: true- badges: true- comments: true- categories: [Evaluation]- image: ###Code import numpy as np import pandas as pd import math ###Output _____no_output_____ ###Markdown HR@K ###Code def hit_rate_at_k(recommended_list, bought_list, k=5): bought_list = np.array(bought_list) recommended_list = np.array(recommended_list)[:k] flags = np.isin(bought_list, recommended_list) return (flags.sum() > 0) * 1 recommended_list = [156, 1134, 27, 1543, 3345, 143, 32, 533, 11, 43] #items ids bought_list = [521, 32, 143, 991] hit_rate_at_k(recommended_list, bought_list, 5) hit_rate_at_k(recommended_list, bought_list, 10) ###Output _____no_output_____ ###Markdown Precision@K - Precision = (# of recommended items that are relevant) / (# of recommended items)- Precision @ k = (# of recommended items @k that are relevant) / (# of recommended items @k)- Money Precision @ k = (revenue of recommended items @k that are relevant) / (revenue of recommended items @k) ###Code def precision_at_k(recommended_list, bought_list, k=5): bought_list = np.array(bought_list) recommended_list = np.array(recommended_list)[:k] flags = np.isin(bought_list, recommended_list) return flags.sum() / len(recommended_list) def money_precision_at_k(recommended_list, bought_list, prices_recommended, k=5): recommend_list = np.array(recommended_list)[:k] prices_recommended = np.array(prices_recommended)[:k] flags = np.isin(recommend_list, bought_list) precision = np.dot(flags, prices_recommended) / prices_recommended.sum() return precision recommended_list = [156, 1134, 27, 1543, 3345, 143, 32, 533, 11, 43] #items ids bought_list = [521, 32, 143, 991] prices_recommendede_list = [400, 60, 40, 90, 60, 340, 70, 190,110, 240] precision_at_k(recommended_list, bought_list, 5) precision_at_k(recommended_list, bought_list, 10) money_precision_at_k(recommended_list, bought_list, prices_recommendede_list, 5) 
money_precision_at_k(recommended_list, bought_list, prices_recommendede_list, 10) ###Output _____no_output_____ ###Markdown Recall@K - Recall = (# of recommended items that are relevant) / (# of relevant items)- Recall @ k = (# of recommended items @k that are relevant) / (# of relevant items)- Money Recall @ k = (revenue of recommended items @k that are relevant) / (revenue of relevant items) ###Code recommended_list=[143,156,1134,991,27,1543,3345,533,11,43] #itemsid prices_recommended_list=[400,60,40,90,60,340,70,190,110,240] bought_list=[521,32,143,991] prices_bought=[150,30,400,90] def recall_at_k(recommended_list, bought_list, k=5): bought_list = np.array(bought_list) recommended_list = np.array(recommended_list)[:k] flags = np.isin(bought_list, recommended_list) return flags.sum() / len(bought_list) def money_recall_at_k(recommended_list, bought_list, prices_recommended, prices_bought, k=5): bought_list = np.array(bought_list) prices_bought = np.array(prices_bought) recommended_list = np.array(recommended_list)[:k] prices_recommended = np.array(prices_recommended)[:k] flags = np.isin(recommended_list, bought_list) return np.dot(flags, prices_recommended)/prices_bought.sum() recall_at_k(recommended_list, bought_list, 5) recall_at_k(recommended_list, bought_list, 10) money_recall_at_k(recommended_list, bought_list, prices_recommended_list, prices_bought, 5) money_recall_at_k(recommended_list, bought_list, prices_recommended_list, prices_bought, 10) ###Output _____no_output_____ ###Markdown MAP@K- MAP @ k (Mean Average Precision @ k )- Average AP @ k for all users ###Code def ap_k(recommended_list, bought_list, k=5): bought_list = np.array(bought_list) recommended_list = np.array(recommended_list)[:k] relevant_indexes = np.nonzero(np.isin(recommended_list, bought_list))[0] if len(relevant_indexes) == 0: return 0 amount_relevant = len(relevant_indexes) sum_ = sum([precision_at_k(recommended_list, bought_list, k=index_relevant+1) for index_relevant in relevant_indexes]) return 
sum_/amount_relevant def map_k(recommended_list, bought_list, k=5): amount_user = len(bought_list) list_ap_k = [ap_k(recommended_list[i], bought_list[i], k) for i in np.arange(amount_user)] sum_ap_k = sum(list_ap_k) return sum_ap_k/amount_user #list of 3 users recommended_list_3_users = [[143,156,1134,991,27,1543,3345,533,11,43], [1134,533,14,4,15,1543,1,99,27,3345], [991,3345,27,533,43,143,1543,156,1134,11]] bought_list_3_users= [[521,32,143], #user1 [143,156,991,43,11], #user2 [1,2]] #user3 map_k(recommended_list_3_users, bought_list_3_users, 5) ###Output _____no_output_____ ###Markdown MRR@K ###Code def reciprocal_rank(recommended_list, bought_list, k=1): recommended_list = np.array(recommended_list) bought_list = np.array(bought_list) amount_user = len(bought_list) rr = [] for i in np.arange(amount_user): relevant_indexes = np.nonzero(np.isin(recommended_list[i][:k], bought_list[i]))[0] if len(relevant_indexes) != 0: rr.append(1/(relevant_indexes[0]+1)) if len(rr) == 0: return 0 return sum(rr)/amount_user reciprocal_rank(recommended_list_3_users, bought_list_3_users, 5) ###Output /usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:3: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. 
If you meant to do this, you must specify 'dtype=object' when creating the ndarray This is separate from the ipykernel package so we can avoid doing imports until ###Markdown NDCG@K ###Code def ndcg_at_k(recommended_list, bought_list, k=5): rec = recommended_list b = bought_list recommended_list = np.array(recommended_list)[:k] bought_list = np.array(bought_list) flags = np.isin(recommended_list, bought_list) rank_list = [] for i in np.arange(len(recommended_list)): if i < 2: rank_list.append(i+1) else: rank_list.append(math.log2(i+1)) if len(recommended_list) == 0: return 0 dcg = sum(np.divide(flags, rank_list)) / len(recommended_list) i_dcg = sum(np.divide(1, rank_list)) / len(recommended_list) # print(i_dcg) return dcg/i_dcg recommended_list = [143,156,1134,991,27,1543,3345,533,11,43] #iditems prices_recommended_list = [400,60,40,90,60,340,70,190,110,240] bought_list = [521,32,143,991] prices_bought = [150,30,400,90] ndcg_at_k(recommended_list, bought_list, 5) ###Output _____no_output_____
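One caveat on the `ndcg_at_k` above: it leaves ranks 1 and 2 undiscounted and only applies `log2` from rank 3 on, which is not the usual textbook definition. The more common binary-relevance formulation discounts every 0-based position *i* by `log2(i + 2)`; a sketch of that standard version (not from the original notebook) looks like:

```python
# Standard binary NDCG@k: every 0-based position i is discounted
# by log2(i + 2); IDCG puts all available relevant items on top.
import math

def ndcg_at_k_standard(recommended, bought, k=5):
    rec = list(recommended)[:k]
    if not rec:
        return 0.0
    relevant = set(bought)
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(rec) if item in relevant)
    ideal_hits = min(len(relevant), len(rec))
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0
```

With this formulation a perfect top-k ranking scores exactly 1.0 and a ranking with no relevant items scores 0.0, which makes scores comparable across users with different numbers of purchases.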
Movie Sentiment Analysis + LSTM.ipynb
###Markdown LSTM Model ###Code embedding_vector_length = 32 model = Sequential() # Uses 32 vectors to represent each word model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length)) # LSTM with 100 memory unit model.add(LSTM(100)) # It's a binary classification issue. Use single neuron to output either 0 or 1 model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) print(model.summary()) model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3, batch_size=64) ###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_4 (Embedding) (None, 500, 32) 160000 _________________________________________________________________ lstm_2 (LSTM) (None, 100) 53200 _________________________________________________________________ dense_2 (Dense) (None, 1) 101 ================================================================= Total params: 213,301 Trainable params: 213,301 Non-trainable params: 0 _________________________________________________________________ None Train on 25000 samples, validate on 25000 samples Epoch 1/3 25000/25000 [==============================] - 223s 9ms/step - loss: 0.4933 - acc: 0.7645 - val_loss: 0.3598 - val_acc: 0.8483 Epoch 2/3 25000/25000 [==============================] - 221s 9ms/step - loss: 0.3133 - acc: 0.8735 - val_loss: 0.3309 - val_acc: 0.8627 Epoch 3/3 25000/25000 [==============================] - 217s 9ms/step - loss: 0.2630 - acc: 0.8974 - val_loss: 0.3048 - val_acc: 0.8759 ###Markdown Model evaluation ###Code print(X_test) scores = model.evaluate(X_test, y_test, verbose=1) print('Accuracy: {}'.format(scores[1] * 100)) text = ''' I've enjoyed previous Thor movies and after seeing the rating here i expected this to be a decent movie, it wasn't. 
I guess this is the trend to make money on movies now days, just have big stars, bad jokes and lot of pointless action and effects. It's just so sad if you think about the potential of how good these movies could be.

Maybe this was the last Marvel movie I bother to watch.
'''
x = keras.preprocessing.text.one_hot(text, top_words, lower=True, split=' ')
x = [x]
x = sequence.pad_sequences(x, max_review_length)
predictions = model.predict_classes(x)
sentiment = predictions[0][0]
print(predictions)
if sentiment == 1:
    print('Someone likes the movie: ', text)
else:
    print('Someone DOESNT like the movie: ', text)
###Output
[[1]]
('Someone likes the movie: ', "\nI've enjoyed previous Thor movies and after seeing the rating here i expected this to be a decent movie, it wasn't.\n\nI guess this is the trend to make money on movies now days, just have big stars, bad jokes and lot of pointless action and effects. It's just so sad if you think about the potential of how good these movies could be.\n\nMaybe this was the last Marvel movie I bother to watch.\n")
###Markdown
Prevent overfitting
###Code
# Construct a new model with dropouts
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3, batch_size=64)
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/3
25000/25000 [==============================] - 230s 9ms/step - loss: 0.4922 - acc: 0.7548 - val_loss: 0.3839 - val_acc: 0.8376
Epoch 2/3
25000/25000 [==============================] - 231s 9ms/step - loss: 0.3165 - acc: 0.8730 - val_loss: 0.4347 - val_acc: 0.7939
Epoch 3/3
25000/25000 [==============================] - 239s 10ms/step - loss: 0.4730 - acc: 0.7889 - val_loss: 0.4109 - val_acc:
0.8239
###Markdown
Evaluate the new LSTM model with dropouts
###Code
scores = model.evaluate(X_test, y_test, verbose=1)
print('Accuracy: {}'.format(scores[1] * 100))

# Try a short positive review this time
x = 'I love this movie!'
text = keras.preprocessing.text.one_hot(x, top_words, lower=True, split=' ')
text = [text]
text = sequence.pad_sequences(text, max_review_length)
predictions = model.predict_classes(text)
sentiment = predictions[0][0]
print(predictions)
if sentiment == 1:
    print('Someone likes the movie: ', x)
else:
    print('Someone DOESNT like the movie ', x)
###Output
_____no_output_____
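The parameter counts printed in the model summaries above can be verified by hand. A quick arithmetic check, recovering `top_words` from the summary's 160,000 embedding parameters and using the layer sizes shown there (embedding dimension 32, 100 LSTM units, a single output neuron):

```python
embedding_dim = 32
top_words = 160000 // embedding_dim   # 5000, recovered from the summary's embedding params
lstm_units = 100

# Embedding: one 32-dimensional vector per vocabulary word
embedding_params = top_words * embedding_dim  # 160,000

# LSTM: 4 gates, each with input weights, recurrent weights and biases
lstm_params = 4 * (lstm_units * (embedding_dim + lstm_units) + lstm_units)  # 53,200

# Dense: 100 weights + 1 bias
dense_params = lstm_units * 1 + 1  # 101

total = embedding_params + lstm_params + dense_params
print(total)  # 213301, matching "Total params: 213,301" in the summary
```

The dropout layers in the second model add no parameters, which is why both models report the same total.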
ssd_keras/ssd300_training.ipynb
###Markdown
SSD300 Training Tutorial

This tutorial explains how to train an SSD300 on the Pascal VOC datasets. The preset parameters reproduce the training of the original SSD300 "07+12" model. Training SSD512 works similarly, so there's no extra tutorial for that. The same goes for training on other datasets.

You can find a summary of a full training here to get an impression of what it should look like:

[SSD300 "07+12" training summary](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md)
###Code
import os
os.environ['CUDA_VISIBLE_DEVICES'] = str(0)

from keras.optimizers import Adam, SGD
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger
from keras import backend as K
from keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt

from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization

from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast

from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_geometric_ops import Resize
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms

%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
0.
Preliminary note

All places in the code where you need to make any changes are marked `TODO` and explained accordingly. All code cells that don't contain `TODO` markers just need to be executed.

1. Set the model configuration parameters

This section sets the configuration parameters for the model definition. The parameters set here are used both by the `ssd_300()` function that builds the SSD300 model and, further down, by the constructor for the `SSDInputEncoder` object that is needed to run the training. Most of these parameters are needed to define the anchor boxes.

The parameters as set below produce the original SSD300 architecture that was trained on the Pascal VOC datasets, i.e. they are all chosen to correspond exactly to their respective counterparts in the `.prototxt` file that defines the original Caffe implementation. Note that the anchor box scaling factors of the original SSD implementation vary depending on the datasets on which the models were trained. The scaling factors used for the MS COCO datasets are smaller than the scaling factors used for the Pascal VOC datasets. The reason why the list of scaling factors has 7 elements while there are only 6 predictor layers is that the last scaling factor is used for the second aspect-ratio-1 box of the last predictor layer. Refer to the documentation for details.

As mentioned above, the parameters set below are not only needed to build the model, but are also passed to the `SSDInputEncoder` constructor further down, which is responsible for matching and encoding ground truth boxes and anchor boxes during the training. In order to do that, it needs to know the anchor box parameters.
###Code
img_height = 960 # Height of the model input images
img_width = 720 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset.
Do not change this value if you're using any of the pre-trained weights. swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images. n_classes = 4 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO scales_pascal = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets scales_coco = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets scales = scales_pascal aspect_ratios = [[1.0, 2.0, 0.5], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5], [1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters two_boxes_for_ar1 = True steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer. offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer. clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation normalize_coords = True ###Output _____no_output_____ ###Markdown 2. Build or load the modelYou will want to execute either of the two code cells in the subsequent two sub-sections, not both. 2.1 Create a new model and load trained VGG-16 weights into it (or trained SSD weights)If you want to create a new SSD300 model, this is the relevant section for you. If you want to load a previously saved SSD300 model, skip ahead to section 2.2.The code cell below does the following things:1. It calls the function `ssd_300()` to build the model.2. 
It then loads the weights file that is found at `weights_path` into the model. You could load the trained VGG-16 weights or you could load the weights of a trained model. If you want to reproduce the original SSD training, load the pre-trained VGG-16 weights. In any case, you need to set the path to the weights file you want to load on your local machine. Download links to all the trained weights are provided in the [README](https://github.com/pierluigiferrari/ssd_keras/blob/master/README.md) of this repository.

3. Finally, it compiles the model for the training. In order to do so, we're defining an optimizer (Adam) and a loss function (SSDLoss) to be passed to the `compile()` method.

The original implementation uses plain SGD with momentum, but the code below uses Adam, which is generally the superior optimizer. If your goal is to reproduce the original training exactly, switch to the commented-out SGD optimizer. You might need to adjust the learning rate scheduler below slightly depending on which optimizer you use.

Note that the learning rate that is being set here doesn't matter, because further below we'll pass a learning rate scheduler to the training function, which will overwrite any learning rate set here, i.e. what matters are the learning rates that are defined by the learning rate scheduler.

`SSDLoss` is a custom Keras loss function that implements the multi-task loss that consists of a log loss for classification and a smooth L1 loss for localization. `neg_pos_ratio` and `alpha` are set as in the paper.
###Code
# 1: Build the Keras model.

K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, img_channels),
                n_classes=n_classes,
                mode='training',
                l2_regularization=0.0005,
                scales=scales,
                aspect_ratios_per_layer=aspect_ratios,
                two_boxes_for_ar1=two_boxes_for_ar1,
                steps=steps,
                offsets=offsets,
                clip_boxes=clip_boxes,
                variances=variances,
                normalize_coords=normalize_coords,
                subtract_mean=mean_color,
                swap_channels=swap_channels)

# 2: Load some weights into the model.

# TODO: Set the path to the weights you want to load.
weights_path = 'VGG_ILSVRC_16_layers_fc_reduced.h5'

model.load_weights(weights_path, by_name=True)

# 3: Instantiate an optimizer and the SSD loss function and compile the model.
#    If you want to follow the original Caffe implementation, switch to the
#    commented-out SGD optimizer; otherwise the Adam optimizer used below is recommended.

adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
#sgd = SGD(lr=0.001, momentum=0.9, decay=0.0, nesterov=False)

ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)

model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
print(model.summary())
###Output
__________________________________________________________________________________________________
Layer (type)                    Output Shape          Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 960, 720, 3)   0
__________________________________________________________________________________________________
identity_layer (Lambda)         (None, 960, 720, 3)   0           input_1[0][0]
__________________________________________________________________________________________________
input_mean_normalization (Lambd (None, 960, 720, 3)   0           identity_layer[0][0]
__________________________________________________________________________________________________
input_channel_swap (Lambda)     (None, 960, 720, 3)   0           input_mean_normalization[0][0]
__________________________________________________________________________________________________
conv1_1 (Conv2D)
(None, 960, 720, 64) 1792 input_channel_swap[0][0] __________________________________________________________________________________________________ conv1_2 (Conv2D) (None, 960, 720, 64) 36928 conv1_1[0][0] __________________________________________________________________________________________________ pool1 (MaxPooling2D) (None, 480, 360, 64) 0 conv1_2[0][0] __________________________________________________________________________________________________ conv2_1 (Conv2D) (None, 480, 360, 128 73856 pool1[0][0] __________________________________________________________________________________________________ conv2_2 (Conv2D) (None, 480, 360, 128 147584 conv2_1[0][0] __________________________________________________________________________________________________ pool2 (MaxPooling2D) (None, 240, 180, 128 0 conv2_2[0][0] __________________________________________________________________________________________________ conv3_1 (Conv2D) (None, 240, 180, 256 295168 pool2[0][0] __________________________________________________________________________________________________ conv3_2 (Conv2D) (None, 240, 180, 256 590080 conv3_1[0][0] __________________________________________________________________________________________________ conv3_3 (Conv2D) (None, 240, 180, 256 590080 conv3_2[0][0] __________________________________________________________________________________________________ pool3 (MaxPooling2D) (None, 120, 90, 256) 0 conv3_3[0][0] __________________________________________________________________________________________________ conv4_1 (Conv2D) (None, 120, 90, 512) 1180160 pool3[0][0] __________________________________________________________________________________________________ conv4_2 (Conv2D) (None, 120, 90, 512) 2359808 conv4_1[0][0] __________________________________________________________________________________________________ conv4_3 (Conv2D) (None, 120, 90, 512) 2359808 conv4_2[0][0] 
__________________________________________________________________________________________________ pool4 (MaxPooling2D) (None, 60, 45, 512) 0 conv4_3[0][0] __________________________________________________________________________________________________ conv5_1 (Conv2D) (None, 60, 45, 512) 2359808 pool4[0][0] __________________________________________________________________________________________________ conv5_2 (Conv2D) (None, 60, 45, 512) 2359808 conv5_1[0][0] __________________________________________________________________________________________________ conv5_3 (Conv2D) (None, 60, 45, 512) 2359808 conv5_2[0][0] __________________________________________________________________________________________________ pool5 (MaxPooling2D) (None, 60, 45, 512) 0 conv5_3[0][0] __________________________________________________________________________________________________ fc6 (Conv2D) (None, 60, 45, 1024) 4719616 pool5[0][0] __________________________________________________________________________________________________ fc7 (Conv2D) (None, 60, 45, 1024) 1049600 fc6[0][0] __________________________________________________________________________________________________ conv6_1 (Conv2D) (None, 60, 45, 256) 262400 fc7[0][0] __________________________________________________________________________________________________ conv6_padding (ZeroPadding2D) (None, 62, 47, 256) 0 conv6_1[0][0] __________________________________________________________________________________________________ conv6_2 (Conv2D) (None, 30, 23, 512) 1180160 conv6_padding[0][0] __________________________________________________________________________________________________ conv7_1 (Conv2D) (None, 30, 23, 128) 65664 conv6_2[0][0] __________________________________________________________________________________________________ conv7_padding (ZeroPadding2D) (None, 32, 25, 128) 0 conv7_1[0][0] __________________________________________________________________________________________________ conv7_2 
(Conv2D) (None, 15, 12, 256) 295168 conv7_padding[0][0] __________________________________________________________________________________________________ conv8_1 (Conv2D) (None, 15, 12, 128) 32896 conv7_2[0][0] __________________________________________________________________________________________________ conv8_2 (Conv2D) (None, 13, 10, 256) 295168 conv8_1[0][0] __________________________________________________________________________________________________ conv9_1 (Conv2D) (None, 13, 10, 128) 32896 conv8_2[0][0] __________________________________________________________________________________________________ conv4_3_norm (L2Normalization) (None, 120, 90, 512) 512 conv4_3[0][0] __________________________________________________________________________________________________ conv9_2 (Conv2D) (None, 11, 8, 256) 295168 conv9_1[0][0] __________________________________________________________________________________________________ conv4_3_norm_mbox_conf (Conv2D) (None, 120, 90, 20) 92180 conv4_3_norm[0][0] __________________________________________________________________________________________________ fc7_mbox_conf (Conv2D) (None, 60, 45, 30) 276510 fc7[0][0] __________________________________________________________________________________________________ conv6_2_mbox_conf (Conv2D) (None, 30, 23, 30) 138270 conv6_2[0][0] __________________________________________________________________________________________________ conv7_2_mbox_conf (Conv2D) (None, 15, 12, 30) 69150 conv7_2[0][0] __________________________________________________________________________________________________ conv8_2_mbox_conf (Conv2D) (None, 13, 10, 20) 46100 conv8_2[0][0] __________________________________________________________________________________________________ conv9_2_mbox_conf (Conv2D) (None, 11, 8, 20) 46100 conv9_2[0][0] __________________________________________________________________________________________________ conv4_3_norm_mbox_loc (Conv2D) (None, 120, 90, 16) 73744 
conv4_3_norm[0][0] __________________________________________________________________________________________________ fc7_mbox_loc (Conv2D) (None, 60, 45, 24) 221208 fc7[0][0] __________________________________________________________________________________________________ conv6_2_mbox_loc (Conv2D) (None, 30, 23, 24) 110616 conv6_2[0][0] __________________________________________________________________________________________________ conv7_2_mbox_loc (Conv2D) (None, 15, 12, 24) 55320 conv7_2[0][0] __________________________________________________________________________________________________ conv8_2_mbox_loc (Conv2D) (None, 13, 10, 16) 36880 conv8_2[0][0] __________________________________________________________________________________________________ conv9_2_mbox_loc (Conv2D) (None, 11, 8, 16) 36880 conv9_2[0][0] __________________________________________________________________________________________________ conv4_3_norm_mbox_conf_reshape (None, 43200, 5) 0 conv4_3_norm_mbox_conf[0][0] __________________________________________________________________________________________________ fc7_mbox_conf_reshape (Reshape) (None, 16200, 5) 0 fc7_mbox_conf[0][0] __________________________________________________________________________________________________ conv6_2_mbox_conf_reshape (Resh (None, 4140, 5) 0 conv6_2_mbox_conf[0][0] __________________________________________________________________________________________________ conv7_2_mbox_conf_reshape (Resh (None, 1080, 5) 0 conv7_2_mbox_conf[0][0] __________________________________________________________________________________________________ conv8_2_mbox_conf_reshape (Resh (None, 520, 5) 0 conv8_2_mbox_conf[0][0] __________________________________________________________________________________________________ conv9_2_mbox_conf_reshape (Resh (None, 352, 5) 0 conv9_2_mbox_conf[0][0] __________________________________________________________________________________________________ conv4_3_norm_mbox_priorbox 
(Anc (None, 120, 90, 4, 8 0 conv4_3_norm_mbox_loc[0][0] __________________________________________________________________________________________________ fc7_mbox_priorbox (AnchorBoxes) (None, 60, 45, 6, 8) 0 fc7_mbox_loc[0][0] __________________________________________________________________________________________________ conv6_2_mbox_priorbox (AnchorBo (None, 30, 23, 6, 8) 0 conv6_2_mbox_loc[0][0] __________________________________________________________________________________________________ conv7_2_mbox_priorbox (AnchorBo (None, 15, 12, 6, 8) 0 conv7_2_mbox_loc[0][0] __________________________________________________________________________________________________ conv8_2_mbox_priorbox (AnchorBo (None, 13, 10, 4, 8) 0 conv8_2_mbox_loc[0][0] __________________________________________________________________________________________________ conv9_2_mbox_priorbox (AnchorBo (None, 11, 8, 4, 8) 0 conv9_2_mbox_loc[0][0] __________________________________________________________________________________________________ mbox_conf (Concatenate) (None, 65492, 5) 0 conv4_3_norm_mbox_conf_reshape[0] fc7_mbox_conf_reshape[0][0] conv6_2_mbox_conf_reshape[0][0] conv7_2_mbox_conf_reshape[0][0] conv8_2_mbox_conf_reshape[0][0] conv9_2_mbox_conf_reshape[0][0] __________________________________________________________________________________________________ conv4_3_norm_mbox_loc_reshape ( (None, 43200, 4) 0 conv4_3_norm_mbox_loc[0][0] __________________________________________________________________________________________________ fc7_mbox_loc_reshape (Reshape) (None, 16200, 4) 0 fc7_mbox_loc[0][0] __________________________________________________________________________________________________ conv6_2_mbox_loc_reshape (Resha (None, 4140, 4) 0 conv6_2_mbox_loc[0][0] __________________________________________________________________________________________________ conv7_2_mbox_loc_reshape (Resha (None, 1080, 4) 0 conv7_2_mbox_loc[0][0] 
__________________________________________________________________________________________________ conv8_2_mbox_loc_reshape (Resha (None, 520, 4) 0 conv8_2_mbox_loc[0][0] __________________________________________________________________________________________________ conv9_2_mbox_loc_reshape (Resha (None, 352, 4) 0 conv9_2_mbox_loc[0][0] __________________________________________________________________________________________________ conv4_3_norm_mbox_priorbox_resh (None, 43200, 8) 0 conv4_3_norm_mbox_priorbox[0][0] __________________________________________________________________________________________________ fc7_mbox_priorbox_reshape (Resh (None, 16200, 8) 0 fc7_mbox_priorbox[0][0] __________________________________________________________________________________________________ conv6_2_mbox_priorbox_reshape ( (None, 4140, 8) 0 conv6_2_mbox_priorbox[0][0] __________________________________________________________________________________________________ conv7_2_mbox_priorbox_reshape ( (None, 1080, 8) 0 conv7_2_mbox_priorbox[0][0] __________________________________________________________________________________________________ conv8_2_mbox_priorbox_reshape ( (None, 520, 8) 0 conv8_2_mbox_priorbox[0][0] __________________________________________________________________________________________________ conv9_2_mbox_priorbox_reshape ( (None, 352, 8) 0 conv9_2_mbox_priorbox[0][0] __________________________________________________________________________________________________ mbox_conf_softmax (Activation) (None, 65492, 5) 0 mbox_conf[0][0] __________________________________________________________________________________________________ mbox_loc (Concatenate) (None, 65492, 4) 0 conv4_3_norm_mbox_loc_reshape[0][ fc7_mbox_loc_reshape[0][0] conv6_2_mbox_loc_reshape[0][0] conv7_2_mbox_loc_reshape[0][0] conv8_2_mbox_loc_reshape[0][0] conv9_2_mbox_loc_reshape[0][0] __________________________________________________________________________________________________ 
mbox_priorbox (Concatenate) (None, 65492, 8) 0 conv4_3_norm_mbox_priorbox_reshap fc7_mbox_priorbox_reshape[0][0] conv6_2_mbox_priorbox_reshape[0][ conv7_2_mbox_priorbox_reshape[0][ conv8_2_mbox_priorbox_reshape[0][ conv9_2_mbox_priorbox_reshape[0][ __________________________________________________________________________________________________ predictions (Concatenate) (None, 65492, 17) 0 mbox_conf_softmax[0][0] mbox_loc[0][0] mbox_priorbox[0][0] ================================================================================================== Total params: 24,146,894 Trainable params: 24,146,894 Non-trainable params: 0 __________________________________________________________________________________________________ None ###Markdown 2.2 Load a previously created modelIf you have previously created and saved a model and would now like to load it, execute the next code cell. The only thing you need to do here is to set the path to the saved model HDF5 file that you would like to load.The SSD model contains custom objects: Neither the loss function nor the anchor box or L2-normalization layer types are contained in the Keras core library, so we need to provide them to the model loader.This next code cell assumes that you want to load a model that was created in 'training' mode. If you want to load a model that was created in 'inference' or 'inference_fast' mode, you'll have to add the `DecodeDetections` or `DecodeDetectionsFast` layer type to the `custom_objects` dictionary below. ###Code # TODO: Set the path to the `.h5` file of the model to be loaded. model_path = 'ssd300_pascal_07+12_epoch-20_loss-4.4403_val_loss-3.9984.h5' # We need to create an SSDLoss object in order to pass that to the model loader. ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0) K.clear_session() # Clear previous models from memory. 
model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes, 'L2Normalization': L2Normalization, 'compute_loss': ssd_loss.compute_loss}) ###Output WARNING:tensorflow:From /caa/Homes01/mburges/anaconda3/envs/ssd/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead. WARNING:tensorflow:From /caa/Homes01/mburges/anaconda3/envs/ssd/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead. ###Markdown 3. Set up the data generators for the trainingThe code cells below set up the data generators for the training and validation datasets to train the model. The settings below reproduce the original SSD training on Pascal VOC 2007 `trainval` plus 2012 `trainval` and validation on Pascal VOC 2007 `test`.The only thing you need to change here are the filepaths to the datasets on your local machine. Note that parsing the labels from the XML annotations files can take a while.Note that the generator provides two options to speed up the training. By default, it loads the individual images for a batch from disk. This has two disadvantages. First, for compressed image formats like JPG, this is a huge computational waste, because every image needs to be decompressed again and again every time it is being loaded. Second, the images on disk are likely not stored in a contiguous block of memory, which may also slow down the loading process. The first option that `DataGenerator` provides to deal with this is to load the entire dataset into memory, which reduces the access time for any image to a negligible amount, but of course this is only an option if you have enough free memory to hold the whole dataset. As a second option, `DataGenerator` provides the possibility to convert the dataset into a single HDF5 file. 
This HDF5 file stores the images as uncompressed arrays in a contiguous block of memory, which dramatically speeds up the loading time. It's not as good as having the images in memory, but it's a lot better than the default option of loading them from their compressed JPG state every time they are needed. Of course such an HDF5 dataset may require significantly more disk space than the compressed images (around 9 GB total for Pascal VOC 2007 `trainval` plus 2012 `trainval` and another 2.6 GB for 2007 `test`). You can later load these HDF5 datasets directly in the constructor.

The original SSD implementation uses a batch size of 32 for the training. In case you run into GPU memory issues, reduce the batch size accordingly. You need at least 7 GB of free GPU memory to train an SSD300 with 20 object classes with a batch size of 32.

The `DataGenerator` itself is fairly generic. It doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.

The data augmentation settings defined further down reproduce the data augmentation pipeline of the original SSD training. The training generator receives an object `ssd_data_augmentation`, which is a transformation object that is itself composed of a whole chain of transformations that replicate the data augmentation procedure used to train the original Caffe implementation.
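As a sanity check on the anchor box setup the encoder has to match against, the predictor grid sizes from the model summary above (for the 960×720 input used here), paired with the boxes per cell implied by the aspect ratio lists (4 for layers with `[1.0, 2.0, 0.5]` plus the extra aspect-ratio-1 box, 6 for layers with the two additional ratios), should reproduce the 65,492 boxes seen in the `predictions` layer:

```python
# Grid sizes read off the model summary above, paired with boxes per cell
predictor_grids = [(120, 90, 4),   # conv4_3_norm
                   (60, 45, 6),    # fc7
                   (30, 23, 6),    # conv6_2
                   (15, 12, 6),    # conv7_2
                   (13, 10, 4),    # conv8_2
                   (11, 8, 4)]     # conv9_2

total_boxes = sum(h * w * n for h, w, n in predictor_grids)
print(total_boxes)  # 65492, matching the (None, 65492, ...) shapes in the summary
```

Note how the first predictor layer alone contributes 120 × 90 × 4 = 43,200 boxes, about two thirds of the total — the price of detecting small objects on a fine grid.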
The validation generator receives an object `resize`, which simply resizes the input images.An `SSDInputEncoder` object, `ssd_input_encoder`, is passed to both the training and validation generators. As explained above, it matches the ground truth labels to the model's anchor boxes and encodes the box coordinates into the format that the model needs.In order to train the model on a dataset other than Pascal VOC, either choose `DataGenerator`'s appropriate parser method that corresponds to your data format, or, if `DataGenerator` does not provide a suitable parser for your data format, you can write an additional parser and add it. Out of the box, `DataGenerator` can handle datasets that use the Pascal VOC format (use `parse_xml()`), the MS COCO format (use `parse_json()`) and a wide range of CSV formats (use `parse_csv()`). ###Code # 1: Instantiate two `DataGenerator` objects: One for training, one for validation. # Optional: If you have enough memory, consider loading the images into memory for the reasons explained above. train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None) val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None) # 2: Parse the image and label lists for the training and validation datasets. This can take a while. # TODO: Set the paths to the datasets here. # The directories that contain the images. VOC_2007_images_dir = '/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/VOCTemplate/VOC2019/JPEGImages/' # The directories that contain the annotations. VOC_2007_annotations_dir = '/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/VOCTemplate/VOC2019/Annotations/' # The paths to the image sets. 
VOC_2007_train_image_set_filename = '/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/VOCTemplate/VOC2019//ImageSets/Main/train.txt'
VOC_2007_val_image_set_filename = '/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/VOCTemplate/VOC2019/ImageSets/Main/val.txt'
VOC_2007_trainval_image_set_filename = '/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/VOCTemplate/VOC2019/ImageSets/Main/trainval.txt'
VOC_2007_test_image_set_filename = '/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/VOCTemplate/VOC2019/ImageSets/Main/test.txt'

# The XML parser needs to know what object class names to look for and in which order to map them to integers.
classes = ['crowd', 'civilian', 'soldier', 'civil vehicle']

train_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
                        image_set_filenames=[VOC_2007_trainval_image_set_filename],
                        annotations_dirs=[VOC_2007_annotations_dir],
                        classes=classes,
                        include_classes='all',
                        exclude_truncated=False,
                        exclude_difficult=False,
                        ret=False)

val_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
                      image_set_filenames=[VOC_2007_trainval_image_set_filename],
                      annotations_dirs=[VOC_2007_annotations_dir],
                      classes=classes,
                      include_classes='all',
                      exclude_truncated=False,
                      exclude_difficult=True,
                      ret=False)

# 3: Set the batch size.
batch_size = 8 # Change the batch size if you like, or if you run into GPU memory issues.

# 4: Set the image transformations for pre-processing and data augmentation options.

# For the training generator:
ssd_data_augmentation = SSDDataAugmentation(img_height=img_height,
                                            img_width=img_width,
                                            background=mean_color)

# For the validation generator:
convert_to_3_channels = ConvertTo3Channels()
resize = Resize(height=img_height, width=img_width)

# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.

# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf').output_shape[1:3], model.get_layer('fc7_mbox_conf').output_shape[1:3], model.get_layer('conv6_2_mbox_conf').output_shape[1:3], model.get_layer('conv7_2_mbox_conf').output_shape[1:3], model.get_layer('conv8_2_mbox_conf').output_shape[1:3], model.get_layer('conv9_2_mbox_conf').output_shape[1:3]] ssd_input_encoder = SSDInputEncoder(img_height=img_height, img_width=img_width, n_classes=n_classes, predictor_sizes=predictor_sizes, scales=scales, aspect_ratios_per_layer=aspect_ratios, two_boxes_for_ar1=two_boxes_for_ar1, steps=steps, offsets=offsets, clip_boxes=clip_boxes, variances=variances, matching_type='multi', pos_iou_threshold=0.5, neg_iou_limit=0.5, normalize_coords=normalize_coords) # 6: Create the generator handles that will be passed to Keras' `fit_generator()` function. train_generator = train_dataset.generate(batch_size=batch_size, shuffle=True, transformations=[ssd_data_augmentation], label_encoder=ssd_input_encoder, returns={'processed_images', 'encoded_labels'}, keep_images_without_gt=False) val_generator = val_dataset.generate(batch_size=batch_size, shuffle=False, transformations=[convert_to_3_channels, resize], label_encoder=ssd_input_encoder, returns={'processed_images', 'encoded_labels'}, keep_images_without_gt=False) # Get the number of samples in the training and validation datasets. train_dataset_size = train_dataset.get_dataset_size() val_dataset_size = val_dataset.get_dataset_size() print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size)) print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size)) ###Output Number of images in the training dataset: 1208 Number of images in the validation dataset: 1208 ###Markdown 4. Set the remaining training parameters We've already chosen an optimizer and set the batch size above, now let's set the remaining training parameters. I'll set one epoch to consist of 1,000 training steps.
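At 1,000 training steps per epoch, step-based milestone counts convert to epochs by simple integer division. A minimal sketch of that arithmetic (the `steps_to_epochs` helper is hypothetical, introduced here only for illustration, not part of this repository):

```python
# Hypothetical helper: translate step milestones into epoch boundaries,
# given a fixed number of training steps per epoch.
def steps_to_epochs(step_milestones, steps_per_epoch=1000):
    # Integer division: a milestone at step 80,000 falls at epoch 80.
    return [s // steps_per_epoch for s in step_milestones]

# The original Caffe schedule drops the learning rate at 80k and 100k
# steps and stops at 120k steps:
print(steps_to_epochs([80000, 100000, 120000]))  # -> [80, 100, 120]
```

These epoch boundaries (80 and 100) are exactly the thresholds used in the learning rate schedule defined in the next code cell.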
The next code cell defines a learning rate schedule that replicates the learning rate schedule of the original Caffe implementation for the training of the SSD300 Pascal VOC "07+12" model. That model was trained for 120,000 steps with a learning rate of 0.001 for the first 80,000 steps, 0.0001 for the next 20,000 steps, and 0.00001 for the last 20,000 steps. If you're training on a different dataset, define the learning rate schedule however you see fit. I'll set only a few essential Keras callbacks below; feel free to add more callbacks if you want TensorBoard summaries or whatever. We obviously need the learning rate scheduler and we want to save the best models during the training. It also makes sense to continuously stream our training history to a CSV log file after every epoch, because if we didn't do that, in case the training terminates with an exception at some point or if the kernel of this Jupyter notebook dies for some reason or anything like that happens, we would lose the entire history for the trained epochs. Finally, we'll also add a callback that makes sure that the training terminates if the loss becomes `NaN`. Depending on the optimizer you use, it can happen that the loss becomes `NaN` during the first iterations of the training. In later iterations it's less of a risk. For example, I've never seen a `NaN` loss when I trained SSD using an Adam optimizer, but I've seen a `NaN` loss a couple of times during the very first couple of hundred training steps of training a new model when I used an SGD optimizer. ###Code # Define a learning rate schedule. def lr_schedule(epoch): if epoch < 80: return 0.001 elif epoch < 100: return 0.0001 else: return 0.00001 # Define model callbacks. # TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath='ssd300_pascal_07+12_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5', monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1) #model_checkpoint.best = csv_logger = CSVLogger(filename='ssd300_pascal_07+12_training_log.csv', separator=',', append=True) learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule, verbose=1) terminate_on_nan = TerminateOnNaN() callbacks = [model_checkpoint, csv_logger, learning_rate_scheduler, terminate_on_nan] ###Output _____no_output_____ ###Markdown 5. Train In order to reproduce the training of the "07+12" model mentioned above, at 1,000 training steps per epoch you'd have to train for 120 epochs. That is going to take really long though, so you might not want to do all 120 epochs in one go and instead train only for a few epochs at a time. You can find a summary of a full training [here](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md). In order to only run a partial training and resume smoothly later on, there are a few things you should note: 1. Always load the full model if you can, rather than building a new model and loading previously saved weights into it. Optimizers like SGD or Adam keep running averages of past gradient moments internally. If you always save and load full models when resuming a training, then the state of the optimizer is maintained and the training picks up exactly where it left off. If you build a new model and load weights into it, the optimizer is being initialized from scratch, which, especially in the case of Adam, leads to small but unnecessary setbacks every time you resume the training with previously saved weights. 2. In order for the learning rate scheduler callback above to work properly, `fit_generator()` needs to know which epoch we're in, otherwise it will start with epoch 0 every time you resume the training.
Set `initial_epoch` to be the next epoch of your training. Note that this parameter is zero-based, i.e. the first epoch is epoch 0. If you had trained for 10 epochs previously and now you'd want to resume the training from there, you'd set `initial_epoch = 10` (since epoch 10 is the eleventh epoch). Furthermore, set `final_epoch` to the last epoch you want to run. To stick with the previous example, if you had trained for 10 epochs previously and now you'd want to train for another 10 epochs, you'd set `initial_epoch = 10` and `final_epoch = 20`. 3. In order for the model checkpoint callback above to work correctly after a kernel restart, set `model_checkpoint.best` to the best validation loss from the previous training. If you don't do this and a new `ModelCheckpoint` object is created after a kernel restart, that object obviously won't know what the last best validation loss was, so it will always save the weights of the first epoch of your new training and record that loss as its new best loss. This isn't super-important, I just wanted to mention it. ###Code # If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly. initial_epoch = 20 final_epoch = 50 steps_per_epoch = 1000 history = model.fit_generator(generator=train_generator, steps_per_epoch=steps_per_epoch, epochs=final_epoch, callbacks=callbacks, validation_data=val_generator, validation_steps=ceil(val_dataset_size/batch_size), initial_epoch=initial_epoch) ###Output Epoch 21/50 Epoch 00021: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1707s 2s/step - loss: 4.4129 - val_loss: 4.1705 Epoch 00021: val_loss improved from inf to 4.17049, saving model to ssd300_pascal_07+12_epoch-21_loss-4.4129_val_loss-4.1705.h5 Epoch 22/50 Epoch 00022: LearningRateScheduler setting learning rate to 0.001.
1000/1000 [==============================] - 1671s 2s/step - loss: 4.4192 - val_loss: 4.0589 Epoch 00022: val_loss improved from 4.17049 to 4.05891, saving model to ssd300_pascal_07+12_epoch-22_loss-4.4192_val_loss-4.0589.h5 Epoch 23/50 Epoch 00023: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1680s 2s/step - loss: 4.3822 - val_loss: 4.0339 Epoch 00023: val_loss improved from 4.05891 to 4.03394, saving model to ssd300_pascal_07+12_epoch-23_loss-4.3822_val_loss-4.0339.h5 Epoch 24/50 Epoch 00024: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1654s 2s/step - loss: 4.3622 - val_loss: 3.9372 Epoch 00024: val_loss improved from 4.03394 to 3.93716, saving model to ssd300_pascal_07+12_epoch-24_loss-4.3622_val_loss-3.9372.h5 Epoch 25/50 Epoch 00025: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1691s 2s/step - loss: 4.3733 - val_loss: 4.0453 Epoch 00025: val_loss did not improve from 3.93716 Epoch 26/50 Epoch 00026: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1648s 2s/step - loss: 4.3475 - val_loss: 3.8964 Epoch 00026: val_loss improved from 3.93716 to 3.89638, saving model to ssd300_pascal_07+12_epoch-26_loss-4.3475_val_loss-3.8964.h5 Epoch 27/50 Epoch 00027: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1581s 2s/step - loss: 4.3434 - val_loss: 3.9428 Epoch 00027: val_loss did not improve from 3.89638 Epoch 28/50 Epoch 00028: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1603s 2s/step - loss: 4.3041 - val_loss: 3.8453 Epoch 00028: val_loss improved from 3.89638 to 3.84530, saving model to ssd300_pascal_07+12_epoch-28_loss-4.3041_val_loss-3.8453.h5 Epoch 29/50 Epoch 00029: LearningRateScheduler setting learning rate to 0.001. 
1000/1000 [==============================] - 1583s 2s/step - loss: 4.2808 - val_loss: 3.8242 Epoch 00029: val_loss improved from 3.84530 to 3.82415, saving model to ssd300_pascal_07+12_epoch-29_loss-4.2808_val_loss-3.8242.h5 Epoch 30/50 Epoch 00030: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1584s 2s/step - loss: 4.3028 - val_loss: 3.8775 Epoch 00030: val_loss did not improve from 3.82415 Epoch 31/50 Epoch 00031: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1572s 2s/step - loss: 4.2672 - val_loss: 3.7923 Epoch 00031: val_loss improved from 3.82415 to 3.79231, saving model to ssd300_pascal_07+12_epoch-31_loss-4.2672_val_loss-3.7923.h5 Epoch 32/50 Epoch 00032: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1591s 2s/step - loss: 4.2717 - val_loss: 3.8525 Epoch 00032: val_loss did not improve from 3.79231 Epoch 33/50 Epoch 00033: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1592s 2s/step - loss: 4.2573 - val_loss: 3.7552 Epoch 00033: val_loss improved from 3.79231 to 3.75520, saving model to ssd300_pascal_07+12_epoch-33_loss-4.2573_val_loss-3.7552.h5 Epoch 34/50 Epoch 00034: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1575s 2s/step - loss: 4.2571 - val_loss: 3.8604 Epoch 00034: val_loss did not improve from 3.75520 Epoch 35/50 Epoch 00035: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1579s 2s/step - loss: 4.2437 - val_loss: 3.9125 Epoch 00035: val_loss did not improve from 3.75520 Epoch 36/50 Epoch 00036: LearningRateScheduler setting learning rate to 0.001. 
1000/1000 [==============================] - 1606s 2s/step - loss: 4.2285 - val_loss: 3.7145 Epoch 00036: val_loss improved from 3.75520 to 3.71446, saving model to ssd300_pascal_07+12_epoch-36_loss-4.2285_val_loss-3.7145.h5 Epoch 37/50 Epoch 00037: LearningRateScheduler setting learning rate to 0.001. 1000/1000 [==============================] - 1670s 2s/step - loss: 4.2319 - val_loss: 3.7147 Epoch 00037: val_loss did not improve from 3.71446 Epoch 38/50 Epoch 00038: LearningRateScheduler setting learning rate to 0.001. 513/1000 [==============>...............] - ETA: 12:01 - loss: 4.2164 ###Markdown 6. Make predictions Now let's make some predictions on the validation dataset with the trained model. For convenience we'll use the validation generator that we've already set up above. Feel free to change the batch size. You can set the `shuffle` option to `False` if you would like to check the model's progress on the same image(s) over the course of the training. ###Code # 1: Set the generator for the predictions. predict_generator = val_dataset.generate(batch_size=1, shuffle=True, transformations=[convert_to_3_channels, resize], label_encoder=None, returns={'processed_images', 'filenames', 'inverse_transform', 'original_images', 'original_labels'}, keep_images_without_gt=False) # 2: Generate samples. batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator) i = 0 # Which batch item to look at print("Image:", batch_filenames[i]) print() print("Ground truth boxes:\n") print(np.array(batch_original_labels[i])) # 3: Make predictions.
y_pred = model.predict(batch_images) ###Output _____no_output_____ ###Markdown Now let's decode the raw predictions in `y_pred`. Had we created the model in 'inference' or 'inference_fast' mode, then the model's final layer would be a `DecodeDetections` layer and `y_pred` would already contain the decoded predictions, but since we created the model in 'training' mode, the model outputs raw predictions that still need to be decoded and filtered. This is what the `decode_detections()` function is for. It does exactly what the `DecodeDetections` layer would do, but using Numpy instead of TensorFlow (i.e. on the CPU instead of the GPU). `decode_detections()` with default argument values follows the procedure of the original SSD implementation: First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes, then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45, and out of what is left after that, the top 200 highest confidence boxes are returned. Those settings are for precision-recall scoring purposes though. In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5, since we're only interested in the very confident predictions. ###Code # 4: Decode the raw predictions in `y_pred`. y_pred_decoded = decode_detections(y_pred, confidence_thresh=0.3, iou_threshold=0.4, top_k=200, normalize_coords=normalize_coords, img_height=img_height, img_width=img_width) ###Output _____no_output_____ ###Markdown We made the predictions on the resized images, but we'd like to visualize the outcome on the original input images, so we'll convert the coordinates accordingly. Don't worry about that opaque `apply_inverse_transforms()` function below, in this simple case it just applies `(* original_image_size / resized_image_size)` to the box coordinates. ###Code # 5: Convert the predictions for the original image.
y_pred_decoded_inv = apply_inverse_transforms(y_pred_decoded, batch_inverse_transforms) np.set_printoptions(precision=2, suppress=True, linewidth=90) print("Predicted boxes:\n") print(' class conf xmin ymin xmax ymax') print(y_pred_decoded_inv[i]) ###Output Predicted boxes: class conf xmin ymin xmax ymax [[ 1. 0.38 415. 177. 644. 718. ] [ 1. 0.35 242. 190. 440. 716. ] [ 1. 0.34 89. 278. 285. 720. ] [ 2. 0.31 237. 190. 447. 773. ]] ###Markdown Finally, let's draw the predicted boxes onto the image. Each predicted box says its confidence next to the category name. The ground truth boxes are also drawn onto the image in green for comparison. ###Code # 5: Draw the predicted boxes onto the image # Set the colors for the bounding boxes colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist() classes = ['crowd', 'civilian', 'soldier', 'civil vehicle'] plt.figure(figsize=(20,12)) plt.imshow(batch_original_images[i]) current_axis = plt.gca() for box in batch_original_labels[i]: xmin = box[1] ymin = box[2] xmax = box[3] ymax = box[4] label = '{}'.format(classes[int(box[0])]) current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2)) current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0}) for box in y_pred_decoded_inv[i]: xmin = box[2] ymin = box[3] xmax = box[4] ymax = box[5] color = colors[int(box[0])] label = '{}: {:.2f}'.format(classes[int(box[0])], box[1]) current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2)) current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0}) ###Output _____no_output_____ ###Markdown SSD300 Training Tutorial This tutorial explains how to train an SSD300 on the Pascal VOC datasets. The preset parameters reproduce the training of the original SSD300 "07+12" model. Training SSD512 works similarly, so there's no extra tutorial for that.
The same goes for training on other datasets. You can find a summary of a full training here to get an impression of what it should look like: [SSD300 "07+12" training summary](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md) ###Code from keras.optimizers import Adam, SGD from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger from keras import backend as K from keras.models import load_model from math import ceil import numpy as np from matplotlib import pyplot as plt from models.keras_ssd300 import ssd_300 from keras_loss_function.keras_ssd_loss import SSDLoss from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes from keras_layers.keras_layer_DecodeDetections import DecodeDetections from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast from keras_layers.keras_layer_L2Normalization import L2Normalization from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast from data_generator.object_detection_2d_data_generator import DataGenerator from data_generator.object_detection_2d_geometric_ops import Resize from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms %matplotlib inline ###Output _____no_output_____ ###Markdown 0. Preliminary note All places in the code where you need to make any changes are marked `TODO` and explained accordingly. All code cells that don't contain `TODO` markers just need to be executed. 1. Set the model configuration parameters This section sets the configuration parameters for the model definition.
The parameters set here are being used both by the `ssd_300()` function that builds the SSD300 model as well as further down by the constructor for the `SSDInputEncoder` object that is needed to run the training. Most of these parameters are needed to define the anchor boxes. The parameters as set below produce the original SSD300 architecture that was trained on the Pascal VOC datasets, i.e. they are all chosen to correspond exactly to their respective counterparts in the `.prototxt` file that defines the original Caffe implementation. Note that the anchor box scaling factors of the original SSD implementation vary depending on the datasets on which the models were trained. The scaling factors used for the MS COCO datasets are smaller than the scaling factors used for the Pascal VOC datasets. The reason why the list of scaling factors has 7 elements while there are only 6 predictor layers is that the last scaling factor is used for the second aspect-ratio-1 box of the last predictor layer. Refer to the documentation for details. As mentioned above, the parameters set below are not only needed to build the model, but are also passed to the `SSDInputEncoder` constructor further down, which is responsible for matching and encoding ground truth boxes and anchor boxes during the training. In order to do that, it needs to know the anchor box parameters. ###Code img_height = 300 # Height of the model input images img_width = 300 # Width of the model input images img_channels = 3 # Number of color channels of the model input images mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset. Do not change this value if you're using any of the pre-trained weights. swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images. n_classes = 20 # Number of positive classes, e.g.
20 for Pascal VOC, 80 for MS COCO scales_pascal = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets scales_coco = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets scales = scales_pascal aspect_ratios = [[1.0, 2.0, 0.5], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5], [1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters two_boxes_for_ar1 = True steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer. offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer. clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation normalize_coords = True ###Output _____no_output_____ ###Markdown 2. Build or load the model You will want to execute either of the two code cells in the subsequent two sub-sections, not both. 2.1 Create a new model and load trained VGG-16 weights into it (or trained SSD weights) If you want to create a new SSD300 model, this is the relevant section for you. If you want to load a previously saved SSD300 model, skip ahead to section 2.2. The code cell below does the following things: 1. It calls the function `ssd_300()` to build the model. 2. It then loads the weights file that is found at `weights_path` into the model. You could load the trained VGG-16 weights or you could load the weights of a trained model. If you want to reproduce the original SSD training, load the pre-trained VGG-16 weights.
In any case, you need to set the path to the weights file you want to load on your local machine. Download links to all the trained weights are provided in the [README](https://github.com/pierluigiferrari/ssd_keras/blob/master/README.md) of this repository. 3. Finally, it compiles the model for the training. In order to do so, we're defining an optimizer (SGD) and a loss function (SSDLoss) to be passed to the `compile()` method. Normally, the optimizer of choice would be Adam (commented out below), but since the original implementation uses plain SGD with momentum, we'll do the same in order to reproduce the original training. Adam is generally the superior optimizer, so if your goal is not to have everything exactly as in the original training, feel free to switch to Adam. You might need to adjust the learning rate scheduler below slightly in case you use Adam. Note that the learning rate that is being set here doesn't matter, because further below we'll pass a learning rate scheduler to the training function, which will overwrite any learning rate set here, i.e. what matters are the learning rates that are defined by the learning rate scheduler. `SSDLoss` is a custom Keras loss function that implements the multi-task loss that consists of a log loss for classification and a smooth L1 loss for localization. `neg_pos_ratio` and `alpha` are set as in the paper. ###Code # 1: Build the Keras model. K.clear_session() # Clear previous models from memory. model = ssd_300(image_size=(img_height, img_width, img_channels), n_classes=n_classes, mode='training', l2_regularization=0.0005, scales=scales, aspect_ratios_per_layer=aspect_ratios, two_boxes_for_ar1=two_boxes_for_ar1, steps=steps, offsets=offsets, clip_boxes=clip_boxes, variances=variances, normalize_coords=normalize_coords, subtract_mean=mean_color, swap_channels=swap_channels) # 2: Load some weights into the model. # TODO: Set the path to the weights you want to load.
weights_path = 'path/to/VGG_ILSVRC_16_layers_fc_reduced.h5' model.load_weights(weights_path, by_name=True) # 3: Instantiate an optimizer and the SSD loss function and compile the model. # If you want to follow the original Caffe implementation, use the preset SGD # optimizer, otherwise I'd recommend the commented-out Adam optimizer. #adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0) sgd = SGD(lr=0.001, momentum=0.9, decay=0.0, nesterov=False) ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0) model.compile(optimizer=sgd, loss=ssd_loss.compute_loss) ###Output _____no_output_____ ###Markdown 2.2 Load a previously created model If you have previously created and saved a model and would now like to load it, execute the next code cell. The only thing you need to do here is to set the path to the saved model HDF5 file that you would like to load. The SSD model contains custom objects: Neither the loss function nor the anchor box or L2-normalization layer types are contained in the Keras core library, so we need to provide them to the model loader. This next code cell assumes that you want to load a model that was created in 'training' mode. If you want to load a model that was created in 'inference' or 'inference_fast' mode, you'll have to add the `DecodeDetections` or `DecodeDetectionsFast` layer type to the `custom_objects` dictionary below. ###Code # TODO: Set the path to the `.h5` file of the model to be loaded. model_path = 'path/to/trained/model.h5' # We need to create an SSDLoss object in order to pass that to the model loader. ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0) K.clear_session() # Clear previous models from memory. model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes, 'L2Normalization': L2Normalization, 'compute_loss': ssd_loss.compute_loss}) ###Output _____no_output_____ ###Markdown 3.
Set up the data generators for the training The code cells below set up the data generators for the training and validation datasets to train the model. The settings below reproduce the original SSD training on Pascal VOC 2007 `trainval` plus 2012 `trainval` and validation on Pascal VOC 2007 `test`. All you need to change here are the filepaths to the datasets on your local machine. Note that parsing the labels from the XML annotations files can take a while. Note that the generator provides two options to speed up the training. By default, it loads the individual images for a batch from disk. This has two disadvantages. First, for compressed image formats like JPG, this is a huge computational waste, because every image needs to be decompressed again and again every time it is being loaded. Second, the images on disk are likely not stored in a contiguous block of memory, which may also slow down the loading process. The first option that `DataGenerator` provides to deal with this is to load the entire dataset into memory, which reduces the access time for any image to a negligible amount, but of course this is only an option if you have enough free memory to hold the whole dataset. As a second option, `DataGenerator` provides the possibility to convert the dataset into a single HDF5 file. This HDF5 file stores the images as uncompressed arrays in a contiguous block of memory, which dramatically speeds up the loading time. It's not as good as having the images in memory, but it's a lot better than the default option of loading them from their compressed JPG state every time they are needed. Of course such an HDF5 dataset may require significantly more disk space than the compressed images (around 9 GB total for Pascal VOC 2007 `trainval` plus 2012 `trainval` and another 2.6 GB for 2007 `test`). You can later load these HDF5 datasets directly in the constructor. The original SSD implementation uses a batch size of 32 for the training.
In case you run into GPU memory issues, reduce the batch size accordingly. You need at least 7 GB of free GPU memory to train an SSD300 with 20 object classes with a batch size of 32. The `DataGenerator` itself is fairly generic. It doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository. The data augmentation settings defined further down reproduce the data augmentation pipeline of the original SSD training. The training generator receives an object `ssd_data_augmentation`, which is a transformation object that is itself composed of a whole chain of transformations that replicate the data augmentation procedure used to train the original Caffe implementation. The validation generator receives an object `resize`, which simply resizes the input images. An `SSDInputEncoder` object, `ssd_input_encoder`, is passed to both the training and validation generators. As explained above, it matches the ground truth labels to the model's anchor boxes and encodes the box coordinates into the format that the model needs. In order to train the model on a dataset other than Pascal VOC, either choose `DataGenerator`'s appropriate parser method that corresponds to your data format, or, if `DataGenerator` does not provide a suitable parser for your data format, you can write an additional parser and add it.
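Whatever the annotation format, a parser's job ultimately boils down to producing two parallel lists: one of image paths and one of per-image ground truth arrays. A minimal sketch for an invented whitespace-separated annotation format (one line per box: `image_name class_id xmin ymin xmax ymax`); the function `parse_plain_txt` and the format itself are hypothetical, and the shown column order matches my reading of the generator's default `labels_output_format`, so verify it against your version before wiring this into `DataGenerator`:

```python
import os
from collections import defaultdict

def parse_plain_txt(annotations_file, images_dir):
    # Hypothetical parser: collect all boxes per image from a plain text
    # file with lines of the form "image_name class_id xmin ymin xmax ymax".
    boxes_per_image = defaultdict(list)
    with open(annotations_file) as f:
        for line in f:
            name, class_id, xmin, ymin, xmax, ymax = line.split()
            boxes_per_image[name].append(
                [int(class_id), int(xmin), int(ymin), int(xmax), int(ymax)])
    # Two parallel lists: full image paths and one list of boxes per image.
    filenames = [os.path.join(images_dir, name) for name in sorted(boxes_per_image)]
    labels = [boxes_per_image[name] for name in sorted(boxes_per_image)]
    return filenames, labels
```

A custom parser method would then store these two lists on the generator instance the same way the built-in parsers do.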
Out of the box, `DataGenerator` can handle datasets that use the Pascal VOC format (use `parse_xml()`), the MS COCO format (use `parse_json()`) and a wide range of CSV formats (use `parse_csv()`). ###Code # 1: Instantiate two `DataGenerator` objects: One for training, one for validation. # Optional: If you have enough memory, consider loading the images into memory for the reasons explained above. train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None) val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None) # 2: Parse the image and label lists for the training and validation datasets. This can take a while. # TODO: Set the paths to the datasets here. # The directories that contain the images. VOC_2007_images_dir = '../../datasets/VOCdevkit/VOC2007/JPEGImages/' VOC_2012_images_dir = '../../datasets/VOCdevkit/VOC2012/JPEGImages/' # The directories that contain the annotations. VOC_2007_annotations_dir = '../../datasets/VOCdevkit/VOC2007/Annotations/' VOC_2012_annotations_dir = '../../datasets/VOCdevkit/VOC2012/Annotations/' # The paths to the image sets. VOC_2007_train_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/train.txt' VOC_2012_train_image_set_filename = '../../datasets/VOCdevkit/VOC2012/ImageSets/Main/train.txt' VOC_2007_val_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/val.txt' VOC_2012_val_image_set_filename = '../../datasets/VOCdevkit/VOC2012/ImageSets/Main/val.txt' VOC_2007_trainval_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/trainval.txt' VOC_2012_trainval_image_set_filename = '../../datasets/VOCdevkit/VOC2012/ImageSets/Main/trainval.txt' VOC_2007_test_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/test.txt' # The XML parser needs to know what object class names to look for and in which order to map them to integers.
classes = ['background',
           'aeroplane', 'bicycle', 'bird', 'boat',
           'bottle', 'bus', 'car', 'cat',
           'chair', 'cow', 'diningtable', 'dog',
           'horse', 'motorbike', 'person', 'pottedplant',
           'sheep', 'sofa', 'train', 'tvmonitor']

train_dataset.parse_xml(images_dirs=[VOC_2007_images_dir, VOC_2012_images_dir],
                        image_set_filenames=[VOC_2007_trainval_image_set_filename, VOC_2012_trainval_image_set_filename],
                        annotations_dirs=[VOC_2007_annotations_dir, VOC_2012_annotations_dir],
                        classes=classes,
                        include_classes='all',
                        exclude_truncated=False,
                        exclude_difficult=False,
                        ret=False)

val_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
                      image_set_filenames=[VOC_2007_test_image_set_filename],
                      annotations_dirs=[VOC_2007_annotations_dir],
                      classes=classes,
                      include_classes='all',
                      exclude_truncated=False,
                      exclude_difficult=True,
                      ret=False)

# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant in case you activated the `load_images_into_memory`
# option in the constructor, because in that case the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the subsequent two function calls.

train_dataset.create_hdf5_dataset(file_path='dataset_pascal_voc_07+12_trainval.h5',
                                  resize=False,
                                  variable_image_size=True,
                                  verbose=True)

val_dataset.create_hdf5_dataset(file_path='dataset_pascal_voc_07_test.h5',
                                resize=False,
                                variable_image_size=True,
                                verbose=True)

# 3: Set the batch size.

batch_size = 32 # Change the batch size if you like, or if you run into GPU memory issues.

# 4: Set the image transformations for pre-processing and data augmentation options.
# For the training generator:
ssd_data_augmentation = SSDDataAugmentation(img_height=img_height,
                                            img_width=img_width,
                                            background=mean_color)

# For the validation generator:
convert_to_3_channels = ConvertTo3Channels()
resize = Resize(height=img_height, width=img_width)

# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.

# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf').output_shape[1:3],
                   model.get_layer('fc7_mbox_conf').output_shape[1:3],
                   model.get_layer('conv6_2_mbox_conf').output_shape[1:3],
                   model.get_layer('conv7_2_mbox_conf').output_shape[1:3],
                   model.get_layer('conv8_2_mbox_conf').output_shape[1:3],
                   model.get_layer('conv9_2_mbox_conf').output_shape[1:3]]

ssd_input_encoder = SSDInputEncoder(img_height=img_height,
                                    img_width=img_width,
                                    n_classes=n_classes,
                                    predictor_sizes=predictor_sizes,
                                    scales=scales,
                                    aspect_ratios_per_layer=aspect_ratios,
                                    two_boxes_for_ar1=two_boxes_for_ar1,
                                    steps=steps,
                                    offsets=offsets,
                                    clip_boxes=clip_boxes,
                                    variances=variances,
                                    matching_type='multi',
                                    pos_iou_threshold=0.5,
                                    neg_iou_limit=0.5,
                                    normalize_coords=normalize_coords)

# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.

train_generator = train_dataset.generate(batch_size=batch_size,
                                         shuffle=True,
                                         transformations=[ssd_data_augmentation],
                                         label_encoder=ssd_input_encoder,
                                         returns={'processed_images', 'encoded_labels'},
                                         keep_images_without_gt=False)

val_generator = val_dataset.generate(batch_size=batch_size,
                                     shuffle=False,
                                     transformations=[convert_to_3_channels, resize],
                                     label_encoder=ssd_input_encoder,
                                     returns={'processed_images', 'encoded_labels'},
                                     keep_images_without_gt=False)

# Get the number of samples in the training and validation datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()

print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))

###Output

Number of images in the training dataset:	 16551
Number of images in the validation dataset:	  4952

###Markdown

4. Set the remaining training parameters

We've already chosen an optimizer and set the batch size above; now let's set the remaining training parameters. I'll set one epoch to consist of 1,000 training steps. The next code cell defines a learning rate schedule that replicates the learning rate schedule of the original Caffe implementation for the training of the SSD300 Pascal VOC "07+12" model. That model was trained for 120,000 steps with a learning rate of 0.001 for the first 80,000 steps, 0.0001 for the next 20,000 steps, and 0.00001 for the last 20,000 steps. If you're training on a different dataset, define the learning rate schedule however you see fit.

I'll set only a few essential Keras callbacks below; feel free to add more callbacks if you want TensorBoard summaries or anything else. We obviously need the learning rate scheduler and we want to save the best models during the training. It also makes sense to continuously stream our training history to a CSV log file after every epoch, because if we didn't do that, in case the training terminates with an exception at some point, or the kernel of this Jupyter notebook dies for some reason, we would lose the entire history for the trained epochs. Finally, we'll also add a callback that makes sure that the training terminates if the loss becomes `NaN`. Depending on the optimizer you use, it can happen that the loss becomes `NaN` during the first iterations of the training. In later iterations it's less of a risk.
For example, I've never seen a `NaN` loss when I trained SSD using an Adam optimizer, but I've seen a `NaN` loss a couple of times during the very first couple of hundred training steps of training a new model when I used an SGD optimizer.

###Code

# Define a learning rate schedule.

def lr_schedule(epoch):
    if epoch < 80:
        return 0.001
    elif epoch < 100:
        return 0.0001
    else:
        return 0.00001

# Define model callbacks.

# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath='ssd300_pascal_07+12_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
                                   monitor='val_loss',
                                   verbose=1,
                                   save_best_only=True,
                                   save_weights_only=False,
                                   mode='auto',
                                   period=1)
#model_checkpoint.best =

csv_logger = CSVLogger(filename='ssd300_pascal_07+12_training_log.csv',
                       separator=',',
                       append=True)

learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule,
                                                verbose=1)

terminate_on_nan = TerminateOnNaN()

callbacks = [model_checkpoint,
             csv_logger,
             learning_rate_scheduler,
             terminate_on_nan]

###Output

_____no_output_____

###Markdown

5. Train

In order to reproduce the training of the "07+12" model mentioned above, at 1,000 training steps per epoch you'd have to train for 120 epochs. That is going to take really long though, so you might not want to do all 120 epochs in one go and instead train only for a few epochs at a time. You can find a summary of a full training [here](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md).

In order to only run a partial training and resume smoothly later on, there are a few things you should note:

1. Always load the full model if you can, rather than building a new model and loading previously saved weights into it. Optimizers like SGD or Adam keep running averages of past gradient moments internally.
If you always save and load full models when resuming a training, then the state of the optimizer is maintained and the training picks up exactly where it left off. If you build a new model and load weights into it, the optimizer is being initialized from scratch, which, especially in the case of Adam, leads to small but unnecessary setbacks every time you resume the training with previously saved weights.2. In order for the learning rate scheduler callback above to work properly, `fit_generator()` needs to know which epoch we're in, otherwise it will start with epoch 0 every time you resume the training. Set `initial_epoch` to be the next epoch of your training. Note that this parameter is zero-based, i.e. the first epoch is epoch 0. If you had trained for 10 epochs previously and now you'd want to resume the training from there, you'd set `initial_epoch = 10` (since epoch 10 is the eleventh epoch). Furthermore, set `final_epoch` to the last epoch you want to run. To stick with the previous example, if you had trained for 10 epochs previously and now you'd want to train for another 10 epochs, you'd set `initial_epoch = 10` and `final_epoch = 20`.3. In order for the model checkpoint callback above to work correctly after a kernel restart, set `model_checkpoint.best` to the best validation loss from the previous training. If you don't do this and a new `ModelCheckpoint` object is created after a kernel restart, that object obviously won't know what the last best validation loss was, so it will always save the weights of the first epoch of your new training and record that loss as its new best loss. This isn't super-important, I just wanted to mention it. ###Code # If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly. 
initial_epoch = 0
final_epoch = 120
steps_per_epoch = 1000

history = model.fit_generator(generator=train_generator,
                              steps_per_epoch=steps_per_epoch,
                              epochs=final_epoch,
                              callbacks=callbacks,
                              validation_data=val_generator,
                              validation_steps=ceil(val_dataset_size/batch_size),
                              initial_epoch=initial_epoch)

###Output

_____no_output_____

###Markdown

6. Make predictions

Now let's make some predictions on the validation dataset with the trained model. For convenience we'll use the validation generator that we've already set up above. Feel free to change the batch size. You can set the `shuffle` option to `False` if you would like to check the model's progress on the same image(s) over the course of the training.

###Code

# 1: Set the generator for the predictions.

predict_generator = val_dataset.generate(batch_size=1,
                                         shuffle=True,
                                         transformations=[convert_to_3_channels, resize],
                                         label_encoder=None,
                                         returns={'processed_images', 'filenames', 'inverse_transform', 'original_images', 'original_labels'},
                                         keep_images_without_gt=False)

# 2: Generate samples.

batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator)

i = 0 # Which batch item to look at

print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(np.array(batch_original_labels[i]))

# 3: Make predictions.

y_pred = model.predict(batch_images)

###Output

_____no_output_____

###Markdown

Now let's decode the raw predictions in `y_pred`. Had we created the model in 'inference' or 'inference_fast' mode, then the model's final layer would be a `DecodeDetections` layer and `y_pred` would already contain the decoded predictions, but since we created the model in 'training' mode, the model outputs raw predictions that still need to be decoded and filtered. This is what the `decode_detections()` function is for. It does exactly what the `DecodeDetections` layer would do, but using Numpy instead of TensorFlow (i.e.
on the CPU instead of the GPU).

`decode_detections()` with default argument values follows the procedure of the original SSD implementation: First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes, then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45, and out of what is left after that, the top 200 highest confidence boxes are returned. Those settings are for precision-recall scoring purposes though. In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5, since we're only interested in the very confident predictions.

###Code

# 4: Decode the raw predictions in `y_pred`.

y_pred_decoded = decode_detections(y_pred,
                                   confidence_thresh=0.5,
                                   iou_threshold=0.4,
                                   top_k=200,
                                   normalize_coords=normalize_coords,
                                   img_height=img_height,
                                   img_width=img_width)

###Output

_____no_output_____

###Markdown

We made the predictions on the resized images, but we'd like to visualize the outcome on the original input images, so we'll convert the coordinates accordingly. Don't worry about that opaque `apply_inverse_transforms()` function below, in this simple case it just applies `(* original_image_size / resized_image_size)` to the box coordinates.

###Code

# 5: Convert the predictions for the original image.

y_pred_decoded_inv = apply_inverse_transforms(y_pred_decoded, batch_inverse_transforms)

np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print('   class   conf xmin   ymin   xmax   ymax')
print(y_pred_decoded_inv[i])

###Output

Predicted boxes:

   class   conf xmin   ymin   xmax   ymax
[[  9.     0.8  364.79   5.24 496.51 203.59]
 [ 12.     1.   115.44  50.   384.22 330.76]
 [ 12.     0.86  68.99 212.78 331.63 355.72]
 [ 15.     0.95   2.62  20.18 235.83 253.07]]

###Markdown

Finally, let's draw the predicted boxes onto the image. Each predicted box says its confidence next to the category name.
The ground truth boxes are also drawn onto the image in green for comparison.

###Code

# 5: Draw the predicted boxes onto the image

# Set the colors for the bounding boxes
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist()
classes = ['background',
           'aeroplane', 'bicycle', 'bird', 'boat',
           'bottle', 'bus', 'car', 'cat',
           'chair', 'cow', 'diningtable', 'dog',
           'horse', 'motorbike', 'person', 'pottedplant',
           'sheep', 'sofa', 'train', 'tvmonitor']

plt.figure(figsize=(20,12))
plt.imshow(batch_original_images[i])

current_axis = plt.gca()

for box in batch_original_labels[i]:
    xmin = box[1]
    ymin = box[2]
    xmax = box[3]
    ymax = box[4]
    label = '{}'.format(classes[int(box[0])])
    current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
    current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0})

for box in y_pred_decoded_inv[i]:
    xmin = box[2]
    ymin = box[3]
    xmax = box[4]
    ymax = box[5]
    color = colors[int(box[0])]
    label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
    current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
    current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0})

###Output

_____no_output_____
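The greedy per-class non-maximum suppression step that `decode_detections()` performs (confidence filtering followed by IoU-based suppression, as described earlier) can be sketched in a toy form. This is an illustrative version for one class, not the library's implementation:

```python
# Minimal sketch of greedy NMS for boxes in (xmin, ymin, xmax, ymax) format.
import numpy as np

def iou(a, b):
    # Intersection-over-union of two boxes.
    ix = max(0., min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0., min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_nms(boxes, scores, iou_threshold=0.45):
    # Keep the highest-scoring box, discard boxes overlapping it above the
    # threshold, and repeat with the remaining boxes.
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best = int(order.pop(0))
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

boxes  = np.array([[0., 0., 10., 10.], [1., 1., 10., 10.], [20., 20., 30., 30.]])
scores = np.array([0.9, 0.8, 0.7])
# The second box overlaps the first with IoU 0.81 > 0.45, so it is suppressed.
print(greedy_nms(boxes, scores))  # [0, 2]
```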
Espanol/DesigningDESI_es.ipynb
###Markdown

OII and more galaxies

Whether by observing ever more distant galaxies or by recovering ones we previously missed, our science always improves with more galaxies! The problem is that we have already analyzed all the easy, bright galaxies, and things get harder as we are forced to observe the faintest galaxies we know of. We have to be smart about how we do this, and sometimes a surprising opportunity presents itself ...

_Warning: this notebook raises the difficulty so that we can design more fun experiments based on what you will learn here. If you run into any trouble, [ask](https://github.com/michaelJwilson/DESI-HighSchool/issues/new/choose). Stick with us!_

Are you tired of listening to your parents? Atoms feel the same way. Their life is a series of rules, rules, rules. Do this, don't do that; the [list](https://es.wikipedia.org/wiki/Transición_electrónica) is long. But sometimes they get tired and rebel,

![title](images/Climate.png)

It turns out that a rebellion, every now and then, can be a good thing. For example, (doubly) ionized oxygen, [OII], (incredibly rarely) emits a unique doublet that it [otherwise](https://es.wikipedia.org/wiki/L%C3%ADnea_prohibida) would not. Let's see what happens.

###Code

# Wavelengths of the OII doublet.
lambdaa = 3727.092 # Angstroms
lambdab = 3729.875 # Angstroms

# Weighted-average wavelength of the doublet.
OII = 3728.483

# Width of each line due to thermal broadening.
def width(center, dv):
    # dv: velocity difference [in units of the speed of light]
    return center * dv

wave = np.arange(1000, 1.e4, 0.05)

dlambdaa = width(lambdaa, 1.e-4)
dlambdab = width(lambdab, 1.e-4)

def gaussian(wave, center, width):
    # https://es.wikipedia.org/wiki/Función_gaussiana
    norm = np.sqrt(2. * np.pi / width)
    return np.exp(-0.5 * (wave - center)**2. / width**2.)
ax = pl.gca()

ax.fill_between(wave, 0., gaussian(wave, lambdaa, dlambdaa), color='b', alpha=1.0)
ax.fill_between(wave, 0., gaussian(wave, lambdab, dlambdab), color='b', alpha=1.0)

ax.fill_between(wave, 0., gaussian(wave, 3889.0, width(3889.0, 1.e-4)), color='k', alpha=1.)

pl.xlim(3700., 3900.)
pl.ylim(0.25, 1.1)

pl.xlabel('Wavelength [AA]')
pl.ylabel('Normalized flux')

###Output

_____no_output_____

###Markdown

First, the _forbidden_ [OII] transitions (blue) form a doublet of two closely spaced lines. These have a finite width because the emitting stars are moving (at 0.01% of the speed of light in this example), which leads to the usual Doppler broadening. Contrast this with the black He I line, which is a single line, or "singlet". The problem is that a single line emitted by a galaxy at one redshift can look like a different line at another redshift.

Your turn: if there were a Lyman-$\alpha$ emitter at $z=4.0$, could you tell it apart from an H-$\alpha$ emitter (6564.61 Angstroms) at a different redshift? What redshift would this second galaxy have? Remember, the observed wavelength is $(1 + z) \ \times$ the rest-frame wavelength, and Lyman-$\alpha$ is the 2-1 transition of hydrogen that we saw in the Introduction.

So, [OII] is unique in the sense that, as a doublet, we are more likely to be able to distinguish it from singlets at different redshifts. The second great thing is that it is the second strongest line emitted by young stars (the first is H-$\alpha$), as in the Orion nebulae, an iconic image of star formation:

High-redshift galaxies contain younger, more actively forming stars, and therefore emit a lot of [OII]. So, as we look farther away, we are more likely to see OII emitters.
Since these galaxies are so far away, it would be very hard to detect something so faint were it not for this OII emission:

###Code

zs = np.arange(0.01, 1.7, 0.01)

lumdists = Planck15.luminosity_distance(zs)

faints = (lumdists / lumdists[0])**2.

pl.xlabel(r'$z$')
pl.ylabel('Faintness relative to the galaxy in your lap')
pl.semilogy(zs, faints)

###Output

_____no_output_____

###Markdown

At $z=0.25$, a galaxy is 1000 times fainter than it would be in your lap. By $z=1.75$, the most distant ELG detected by DESI, it is 10,000 times fainter (just how much fainter depends on whether there is Dark Energy in the Universe; here we assume the ~70% we learned about in the Introduction). [Astropy](https://docs.astropy.org/en/stable/index.html) makes this really easy to compute, but it would be much better to understand how to get there. To get a feel for it, try [this](https://in-the-sky.org/article.php?term=cosmological_distance).

So, we want emission line galaxies (ELGs) with an OII doublet. We had better make sure that our telescope, and the instrument that disperses the light, are capable of detecting and "resolving" this faint signal. Crucially, our instrument must be designed to ensure that the doublet is not blurred, since blurring would turn the doublet into a singlet and lead to exactly the confusion we would like to avoid.

The question is, how should we do this? Would a simple laboratory [prism](https://es.wikipedia.org/wiki/Prisma_(óptica)) be enough? The answer is no: the prism would have to be too large, and would lose too much light, to achieve the required dispersion (separation between colors). We need something more advanced, a grating, which can disperse light through the diffraction (or reflection) and interference caused by a series of slits etched into metal (with a diamond). See [here](https://es.wikipedia.org/wiki/Red_de_difracción) for more details.
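As a rough back-of-the-envelope check (ignoring thermal broadening for the moment), the minimum resolution needed just to split the two doublet lines follows directly from the wavelengths defined above:

```python
# Minimum resolution R = lambda / delta_lambda needed to separate the
# two [OII] lines, using the doublet wavelengths from this notebook.
lambdaa = 3727.092  # Angstroms
lambdab = 3729.875  # Angstroms

delta = lambdab - lambdaa                    # doublet separation, ~2.8 Angstroms
R_min = 0.5 * (lambdaa + lambdab) / delta    # roughly 1.3e3

# Any spectrograph with much lower resolution than this would blur the
# doublet into an apparent singlet.
```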
In fact, DESI uses a special grating that varies the [refractive index](https://es.wikipedia.org/wiki/Índice_de_refracción) of the glass, thousands of times per millimeter, to achieve the same [effect](https://arxiv.org/pdf/1611.00037.pdf):

Etching these lines is expensive, so we should minimize the number we need. You wouldn't waste your own pocket money, would you? So, what resolution do we _need_ to do science with (OII) emission line galaxies? And what does that mean for the instrument we need to build?

The resolution $R$ is defined as $(\lambda / \Delta \lambda)$, where $\Delta \lambda$ is the effective width of a (Gaussian) line. So, as the instrumental resolution decreases, our observed lines broaden:

###Code

def dlamba_inst(R, z, center):
    # equation (2) of https://arxiv.org/pdf/1310.0615.pdf
    return (1. + z) * center / R # [Angstroms]

fig, ax = plt.subplots(1, 1, figsize=(10,10))

for R in [1000., 2000., 3000., 4000., 5.e4]:
    ax.plot(wave, gaussian(wave, lambdaa, dlamba_inst(R, 0.25, lambdaa)), label='R={:.0f}'.format(R))

ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal')

ax.set_xlabel('Wavelength [Angstroms]')
ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]')

ax.legend(frameon=False, loc=1)

ax.set_xlim(3710., 3750.)

###Output

_____no_output_____

###Markdown

So, would a resolution of $R=50,000$ make sense for DESI? No, since the line would already be broader than that, simply because of the thermal velocity of the emitting gas in the galaxy. Let's look at this. If we include the broadening due to both the velocity dispersion of the emitting gas and the instrument, the width saturates no matter the instrumental resolution:

###Code

def dlamba_tot(R, z, center, v=1.e-4):
    # Widths of the Gaussians added in quadrature (https://es.wikipedia.org/wiki/Propagación_de_errores).
    return np.sqrt(dlamba_inst(R, z, center)**2. + width(center, v)**2.)
fig, ax = plt.subplots(1, 1, figsize=(10,10))

ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal')

for R in [1000., 2000., 3000., 4000., 5.e4]:
    ax.plot(wave, gaussian(wave, lambdaa, dlamba_tot(R, 0.25, lambdaa)), label='R={:.0f}'.format(R))

ax.set_xlabel('Wavelength [Angstroms]')
ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]')

ax.legend(frameon=False, loc=1)

ax.set_xlim(3710., 3750.)

###Output

_____no_output_____

###Markdown

So you can see that with an inadequate instrument, [OII] becomes blurred and totally useless to us! But we need to know what is good enough. Let's try. The resolution $R$ defines the resolution element via $R = (\lambda / \Delta \lambda)$, as stated above, for a galaxy at redshift $z$, for example:

###Code

R = 9.e3
z = 1.00

###Output

_____no_output_____

###Markdown

giving the width of a resolution element as

###Code

dlambda = OII * (1 + z) / R # [Angstroms].

###Output

_____no_output_____

###Markdown

A famous [theorem](https://es.wikipedia.org/wiki/Teorema_de_muestreo_de_Nyquist-Shannon) (which, incidentally, is an entry point to [Information Theory](https://es.wikipedia.org/wiki/Teor%C3%ADa_de_la_información) and the digital world) tells us that we need to sample a resolution element at least _twice_ to accurately reconstruct the (band-limited) function being sampled. To be safe, we will sample it three times, giving a pixel width of 1/3 of the width of the resolution element:

###Code

# Width of a pixel in Angstroms, rather than of the resolution element.
dlambda /= 3.

# Match the wavelength grid to the pixel grid.
wave = np.arange(3600, 1.e4, dlambda)

###Output

_____no_output_____

###Markdown

Now, the Mayall Telescope used by DESI has a (primary) mirror 3.8 m in diameter, and therefore an area of

###Code

# Area of DESI's circular primary mirror.
Area = np.pi * (3.8 / 2.)**2.

# [m^2] to [cm^2].
Area *= 1.e4

Area # [cm^2]

###Output

_____no_output_____

###Markdown

with this mirror gently curved to focus the light to a point at the [focus](https://en.wikipedia.org/wiki/Cassegrain_reflector), with a focal length of 10.7 m. When DESI points at the sky, it instantaneously collects the light with 5000 individual fibers at once. You can see 500 of them in one wedge-shaped "petal" below.

Each fiber has a diameter $w=107 \mu m$, or $10^{-4}$ m, and 10 of the petals above populate the DESI focal plane. With the focal length of $f_{\rm{M1}} = 10.7$ m, each fiber receives light from a circular patch of sky of $\theta \simeq (w/2) \ / \ f_{\rm{M1}}$.

###Code

# Angular radius of the fiber, rather than the diameter.
theta = 107e-6 / 2 / 10.7 # [radians]

theta *= 180. / np.pi # [degrees]
theta *= 60. * 60. # [arcseconds]

theta # [arcseconds]

###Output

_____no_output_____

###Markdown

In reality, the 'plate scale' varies such that a better approximation is 1.5 arcseconds.

###Code

theta = 1.5 # [arcseconds]

###Output

_____no_output_____

###Markdown

Each fiber has a small motor that can travel to observe any galaxy within each circle shown. (You can see more examples with the [viewer](https://www.legacysurvey.org/viewerIC%201229).)

The light received by each fiber is redirected along an optical fiber to finally land on a single pixel of a CCD, where each photon is converted into an electron by the [photoelectric effect](https://es.wikipedia.org/wiki/Efecto_fotoeléctrico): one of the first discoveries in Quantum Mechanics, made by Einstein! Our close cousin, the Dark Energy Survey, observes on an identical twin of the Mayall in Chile and has some of the nicest [CCDs](https://www.darkenergysurvey.org/the-des-project/instrument/the-camera/) around (each rectangle).
In total, sixty-two CCDs are shown, with 2048 x 4096 pixels each, for a total of 520 million pixels! In comparison, the latest iPhones have [12 million pixels](https://www.iphonefaq.org/archives/976253). Now, the number of galaxies we need (17 million ELGs) defines the [OII] line luminosity (brightness) we need to reach; that is our target.

###Code

line_flux = 8.e-17 # [ergs/s/cm2].

###Output

_____no_output_____

###Markdown

Let's talk about units. An erg is $10^{-7}$ Joules, so this is a very small amount of energy, in Joules, arriving per second, per cm$^2$.

###Code

def model(wave, sigma, z, r=0.7):
    # Unit amplitude; sigma is the line width, z is the redshift and r is the relative amplitude of the two lines in the doublet.
    return 1. / (1. + r) / np.sqrt(2. * np.pi) / sigma * (r * np.exp(- ((wave - lambdaa * (1. + z)) / np.sqrt(2.) / sigma)**2.) + np.exp(- ((wave - lambdab * (1. + z)) / np.sqrt(2.) / sigma)**2.))

width = dlamba_inst(R, z, lambdaa)

profile = model(wave, width, z) # [1/Angstrom].
profile *= line_flux # [ergs/s/cm2/Angstrom].
profile *= dlambda # [ergs/s/cm2/pixel].

pl.clf()
pl.plot(wave, profile)

pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [ergs/s/cm2/pixel]')

pl.xlim((1. + z) * 3720., (1. + z) * 3740.)

# Summing over pixels gives back the total flux in the line.
np.sum(profile) # [ergs/s/cm2].

###Output

_____no_output_____

###Markdown

The energy of each OII [photon](https://es.wikipedia.org/wiki/Fotón) we receive can be calculated using $E=h \nu$, where $h=6.626 \times 10^{-34} J \cdot s$ and the frequency is given by $c = \nu \cdot \lambda$.

###Code

c = 2.9979e8 * 1.e10 # [Angstrom/s].

nus = c / wave # [Hertz] = [s^{-1}].
Energys = 6.626e-34 * nus # [Joules]
Energys *= 1.e7 # [ergs]

###Output

_____no_output_____

###Markdown

So, the faintest OII-emitting galaxy we could observe would result in each DESI pixel (in wavelength; 15 $\mu m$ in physical size) receiving a number of photons per second given by

###Code

# ergs per ... to photons per ...
profile /= Energys # [photons/s/cm2/pixel].

# Photons received by a DESI pixel per second (assuming no losses in the fibers).
profile *= Area # [photons/s/pixel/M1].

# Total number of photons received by DESI from the source.
np.sum(profile) # [photons/s/M1]

###Output

_____no_output_____

###Markdown

Now, the quantum efficiency of a CCD is not 100%, so not every photon produces an electron. Rather, electrons are produced at a rate of 60 electrons per 100 photons (an efficiency of 60%).

###Code

QE = 0.6

profile *= QE # [electrons/s/pixel/M1].

###Output

_____no_output_____

###Markdown

To counteract this inefficiency we take an exposure lasting 15 minutes, during which electrons accumulate in the CCD pixels.

###Code

exptime = 15. * 60. # [seconds]

profile *= exptime # [electrons/exposure/pixel/M1]

pl.plot(wave, profile)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)

pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

###Output

_____no_output_____

###Markdown

But there is another small problem. As the light from the galaxy travels through the atmosphere, it is jostled around such that it appears smeared on the sky. The apparent size (in arcseconds) of a star, which should really look like a point, is due to what is known as ["seeing"](https://es.wikipedia.org/wiki/Seeing). The buffeting can be so strong, depending on the weather, that starlight can be lost from the fiber even if it is correctly centered. Let's look at this.
###Code

def moffatt(r, fwhm, beta=3.5):
    ## Apparent radial profile of starlight due to the buffeting of the atmosphere.
    ## Section 4 of https://iopscience.iop.org/article/10.1086/675808/pdf; [arcsecond].
    alpha = fwhm / 2. / (2.**(1./beta) - 1.)**0.5
    return (2. * (beta - 1.) / alpha / alpha) * (1. + (r/alpha)**2.)**-beta

fwhm = 2.0

dr = 0.01
rs = np.arange(0.0, 15., dr) ## [arcseconds].
ms = moffatt(rs, fwhm)

pl.axvline(theta, alpha=0.25, c='k')
pl.plot(rs, ms, c='k')

pl.xlabel('Distance from the center of the star [arcseconds]')
pl.ylabel('Apparent relative brightness of the star')

pl.xlim(left=-0.1, right=6.0)

# Range of full-width at half-maximum values for the seeing.
fwhms = np.arange(0.5, 3.5, 0.1)

# Find the index in the distance grid closest to the size of a fiber.
indx = np.abs(rs - theta).argmin()

# A list to collect the fraction of light passing through the fiber for each value of the seeing.
fiberfracs = []

# Loop over all values of the seeing.
for i, fwhm in enumerate(fwhms):
    # Work out the radial profile of the star.
    ms = moffatt(rs, fwhm)

    # Integrate it to get the total light within a given radius.
    Is = 2. * np.pi * dr * np.cumsum(rs * ms)

    # Compute the fiber fraction for every radius we asked for.
    ffrac = Is / Is[-1]

    # Save the fiber fraction at the radius corresponding to the fiber size.
    fiberfracs.append(ffrac[indx])

fiberfracs = np.array(fiberfracs)

pl.plot(fwhms, fiberfracs)
pl.xlim(0.5, 3.0)

pl.xlabel(r'$(FWHM) \ Seeing \ [{\rm arcseconds}]$')
pl.ylabel(r'FIBER FRAC.')

###Output

_____no_output_____

###Markdown

So, as (highly) [turbulent](https://es.wikipedia.org/wiki/Turbulencia) air moves through the atmosphere, the light from the galaxy is smeared out depending on the seeing. When the seeing gets bad, $\simeq 3.0$ arcseconds, 60% of the light can be lost!
DESI needs something like one arcsecond of seeing to observe; otherwise, we simply throw the data away. Ultimately, this means we can expect 80% of the light to be captured in a normal exposure:

###Code

fiberfrac = 0.8

profile *= fiberfrac # [electrons/exposure/pixel/M1]

###Output

_____no_output_____

###Markdown

Now, depending on the phases of the moon, each fiber placed on a galaxy also receives a "background" amount of (moon)light originating from light _scattered_ by the atmosphere. This background depends strongly on the phases of the moon; for ELGs, we must avoid observing close to full moon. Side note: with an apparent angular diameter of $0.5$ degrees, the moon would fit $\approx 6 \times$ side by side across the DESI field of view (3.2 degrees in diameter). A typical level for the background light is 6.4e-18 erg/cm$^2/s/$Angstrom/sq. arcsecond, with a projected fiber area given by

###Code

fib_area = np.pi * theta**2. # [sq. arcsecond]

fib_area

###Output

_____no_output_____

###Markdown

The corresponding _background_ level of photons received by a DESI pixel per second (as before):

###Code

background = 3.4e-18 # [erg/s/cm 2/ Angstrom/sq. arcsecond].
background *= fib_area

background # [erg/s/cm 2/ Angstrom].

###Output

_____no_output_____

###Markdown

which we convert in the same way as before:

###Code

background /= Energys # [photons/s/cm2/pixel].
background *= dlambda # [photons/s/cm2/pixel].

# Background photons received by a DESI pixel per second (assuming no loss in the fiber).
background *= Area # [photons/s/pixel/M1].

# Efficiency.
background *= QE # [electrons/s/pixel/M1].

background *= exptime # [electrons/exposure/pixel/M1].
background ###Output _____no_output_____ ###Markdown The background noise is Poisson: on average we expect the background level of electrons, but any given exposure will fluctuate about it according to a known [distribution](https://en.wikipedia.org/wiki/Poisson_distribution). Assuming the number of measured electrons is dominated by the background, the variance we expect in the measured electron count is that of a Poisson distribution: ###Code pixel_variance = background  # [electrons/exposure/pixel/M1].

noise = []

for p in background:
  noise.append(np.random.poisson(p, 1)[0])

noise = np.array(noise)

noise

data = profile + noise

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, noise)
pl.plot(wave, data)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

###Output _____no_output_____

###Markdown DESI has dedicated fibers that point at blank sky, rather than at galaxies. This makes it possible to measure the sky background and subtract its mean level: ###Code data -= background

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, data)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

###Output _____no_output_____

###Markdown We need to establish whether this is good enough! This will be a fitting exercise, as in the Introduction. We will define a goodness-of-fit metric: $$\chi^2 = \sum_p \left ( \frac{D_p - A \cdot M_p}{\sigma_p} \right )^2$$ which accumulates the (error-weighted) squared distance of the data from the model. Here $A$ represents the line flux, $M$ is the model we defined above, and $\sigma_p$ is the (background-dominated) standard deviation of the electron count in each pixel.
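As a toy illustration of what minimizing this $\chi^2$ over the amplitude $A$ looks like (with made-up numbers, not the DESI values used here), we can compare the closed-form estimate against a brute-force scan:

```python
import numpy as np

# Toy setup: a 'template' Mp, synthetic data Dp = A_true * Mp + noise,
# and per-pixel standard deviations sigma (all numbers hypothetical).
rng    = np.random.default_rng(42)
Mp     = np.exp(-0.5 * (np.arange(50) - 25.)**2. / 9.)
sigma  = np.full(50, 0.05)
A_true = 3.0
Dp     = A_true * Mp + rng.normal(0., sigma)

# Closed-form minimum of chi^2 = sum_p ((Dp - A * Mp) / sigma)^2.
A_hat  = np.sum(Dp * Mp / sigma**2.) / np.sum(Mp**2. / sigma**2.)

# Brute-force check: evaluate chi^2 on a grid of A values.
As     = np.linspace(0., 6., 601)
chi2   = [np.sum(((Dp - A * Mp) / sigma)**2.) for A in As]
A_grid = As[np.argmin(chi2)]

print(A_hat, A_grid)  # both close to A_true = 3.0
```

The grid search and the analytic formula land on the same answer, which is exactly the shortcut derived next.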
Differentiating this with respect to $A$, we find the best-fitting line flux (remember, the true value was defined above): $A = \left (\sum_p D_p M_p / \sigma_p^2 \right ) / \left (\sum_p M_p^2 / \sigma_p^2 \right )$, or ###Code # Estimated line flux.
Mp  = model(wave, width, z) * dlambda  # [ergs/s/cm2/pixel]

Mp /= Energys    # [photons/s/cm2/pixel].
Mp *= Area       # [photons/s/pixel/M1].
Mp *= QE         # [electrons/s/pixel/M1].
Mp *= exptime    # [electrons/exposure/pixel/M1].
Mp *= fiberfrac  # [electrons/exposure/pixel/M1].

pl.plot(wave, data)
pl.plot(wave, Mp * line_flux)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

est_line_flux = np.sum(data * Mp / pixel_variance) / np.sum(Mp**2. / pixel_variance)

est_line_flux ###Output _____no_output_____ ###Markdown Amazing! We have managed to measure the line flux of our emission-line galaxy. Now, what is the error on our measurement? You can obtain it from the second derivative of $\chi^2$: $\sigma_A^{-2} = \left ( \frac{1}{2} \right ) \frac{\partial^2 \chi^2}{\partial A^2} = \sum_p \frac{M_p^2}{\sigma_p^2}$. ###Code varA = np.sum(Mp**2 / pixel_variance)

sigA = 1. / np.sqrt(varA)

sigA ###Output _____no_output_____ ###Markdown This gives a signal-to-noise ratio (how many times larger the 'signal' is than the noise) of $SNR = A / \sigma_A$. ###Code SNR = est_line_flux / sigA

print('For an OII line with line flux {:.3e}, at resolution {:.3f}, the SNR is {:.3f}!'.format(line_flux, R, SNR)) ###Output _____no_output_____ ###Markdown OII and more galaxies Whether observing ever more distant galaxies, or collecting those we previously missed, our science always improves with more galaxies! The problem is that we have already surveyed all the easy, bright galaxies, and things only get harder as we are forced to observe the faintest galaxies we know of.
We have to be smart about how we do this, and sometimes a surprising opportunity presents itself ... _Warning: this notebook ramps up the difficulty, to let us design more fun experiments based on what you will learn here. If you have any trouble, [ask](https://github.com/michaelJwilson/DESI-HighSchool/issues/new/choose). Stick with us!_ Tired of listening to your parents? Atoms feel the same way. Their life is a series of rules, rules, rules. Do this, don't do that; the [list](https://es.wikipedia.org/wiki/Transición_electrónica) is long. But sometimes they get tired and rebel, ![title](images/Climate.png) It turns out that a rebellion, every now and then, can be a good thing. For instance, (doubly) ionized oxygen, or [OII], (incredibly rarely) emits a unique doublet that it otherwise [would not](https://es.wikipedia.org/wiki/L%C3%ADnea_prohibida). Let's see what happens. ###Code # Wavelengths of the OII doublet.
lambdaa = 3727.092  # Angstroms
lambdab = 3729.875  # Angstroms

# Weighted-mean wavelength.
OII = 3728.483

# Width of each line due to thermal broadening.
def width(center, dv):
  # velocity difference [speed of light]
  return center * dv

wave = np.arange(1000, 1.e4, 0.05)

dlambdaa = width(lambdaa, 1.e-4)
dlambdab = width(lambdab, 1.e-4)

def gaussian(wave, center, width):
  # Unnormalized Gaussian line profile; https://es.wikipedia.org/wiki/Función_gaussiana
  return np.exp(-0.5 * (wave - center)**2. / width**2.)

ax = pl.gca()

ax.fill_between(wave, 0., gaussian(wave, lambdaa, dlambdaa), color='b', alpha=1.0)
ax.fill_between(wave, 0., gaussian(wave, lambdab, dlambdab), color='b', alpha=1.0)
ax.fill_between(wave, 0., gaussian(wave, 3889.0, width(3889.0, 1.e-4)), color='k', alpha=1.)

pl.xlim(3700., 3900.)
pl.ylim(0.25, 1.1)
pl.xlabel('Wavelength [AA]')
pl.ylabel('Normalized flux')

###Output _____no_output_____

###Markdown First, the _forbidden_ [OII] transitions (blue) form a doublet of two closely spaced lines. These have a finite width because the emitting stars are moving (at 0.01% of the speed of light in this example), leading to the usual Doppler broadening. Contrast this with the black He I line, which is a single line, or "singlet". The problem is that a single line emitted by a galaxy at one redshift can look like a different line at another redshift. Your turn: if there were a Lyman-$\alpha$ emitter at $z=4.0$, could you tell it apart from an H-$\alpha$ emitter (6564.61 Angstroms) at a different redshift? What redshift would this second galaxy have? Remember, the observed wavelength is $(1 + z) \ \times$ the rest-frame wavelength, and Lyman-$\alpha$ is the 2-1 transition of hydrogen we saw in the introduction. So [OII] is unique in the sense that, being a doublet, we are more likely to be able to distinguish it from singlets at other redshifts. The second great thing is that it is the second strongest line emitted by young stars (the first being H-$\alpha$), as in the Orion nebulae, an iconic image of star formation: High-redshift galaxies host younger, more actively forming stars, and so emit plenty of [OII]. So as we look further away, we are more likely to see OII emitters. Since these galaxies are so far away, it would be very hard to detect something so faint were it not for this OII emission: ###Code zs = np.arange(0.01, 1.7, 0.01)

lumdists = Planck15.luminosity_distance(zs)

faints = (lumdists / lumdists[0])**2.
pl.xlabel(r'$z$')
pl.ylabel('Faintness relative to the galaxy sitting in your lap')

pl.semilogy(zs, faints)

###Output _____no_output_____

###Markdown At $z=0.25$, a galaxy is 1000 times fainter than it would be sitting in your lap. By $z=1.75$, the most distant ELG detected by DESI, it is 10,000 times fainter (exactly how much fainter depends on whether there is Dark Energy in the Universe; here we assume the ~70% we learned about in the Introduction). [Astropy](https://docs.astropy.org/en/stable/index.html) makes this really easy to work out, but it would be much better to understand how to get there. For a feel, try [here](https://in-the-sky.org/article.php?term=cosmological_distance). So, we want emission-line galaxies (ELGs) with an OII doublet. We had better make sure that our telescope, and our instrument for dispersing the light, are capable of detecting and "resolving" this faint signal. Crucially, our instrument must be designed to ensure the doublet is not blurred, since blurring would turn the doublet into a singlet and lead to exactly the confusion we would like to avoid. The question is, how should we do this? Would a simple laboratory [prism](https://es.wikipedia.org/wiki/Prisma_(óptica)) be enough? The answer is no: the prism would have to be too large, and lose too much light, to achieve the required dispersion (separation between colors). We need something more advanced, a grating, which can disperse light through diffraction (or reflection) and the interference caused by a series of slits etched into metal (with diamond). See [here](https://es.wikipedia.org/wiki/Red_de_difracción) for more details.
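Before turning to the hardware, it helps to put a rough number on the dispersion requirement just described. For the doublet to be visibly split, the instrumental width at the doublet, $\lambda / R$, must be below the doublet separation; since both the separation and the observed wavelength scale as $(1 + z)$, the redshift drops out (a sketch, using the doublet wavelengths defined above):

```python
# Rough minimum resolution R needed to split the [OII] doublet: require the
# instrumental width, lambda / R, to be below the doublet separation.
# Both scale as (1 + z), so z cancels.
lambdaa = 3727.092  # [Angstroms]
lambdab = 3729.875  # [Angstroms]
OII     = 3728.483  # weighted-mean wavelength [Angstroms]

separation = lambdab - lambdaa  # [Angstroms], rest frame.
R_min      = OII / separation

print('R must exceed roughly {:.0f}'.format(R_min))
# → roughly 1340.
```

In practice the thermal width adds in quadrature with the instrumental width, so a comfortable margin above this floor is wanted, which is why resolutions of a few thousand appear below.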
In fact, DESI uses a special grating that varies the [refractive index](https://es.wikipedia.org/wiki/Índice_de_refracción) of the glass thousands of times per millimeter to achieve the same [effect](https://arxiv.org/pdf/1611.00037.pdf): Etching these lines is expensive, so we should minimize how many we need. You wouldn't waste money out of your own pocket, would you? So, what resolution do we _need_ to do science with (OII) emission-line galaxies? And what does that mean for the instrument we need to build? The resolution $R$ is defined as $(\lambda / \Delta \lambda)$, where $\Delta \lambda$ is the effective width of a (Gaussian) line. So as the instrumental resolution decreases, our observed lines broaden: ###Code def dlamba_inst(R, z, center):
  # Equation (2) of https://arxiv.org/pdf/1310.0615.pdf
  return (1. + z) * center / R  # [Angstroms]

fig, ax = plt.subplots(1, 1, figsize=(10,10))

for R in [1000., 2000., 3000., 4000., 5.e4]:
  ax.plot(wave, gaussian(wave, lambdaa, dlamba_inst(R, 0.25, lambdaa)), label='R={:.0f}'.format(R))

ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal')

ax.set_xlabel('Wavelength [Angstroms]')
ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]')

ax.legend(frameon=False, loc=1)
ax.set_xlim(3710., 3750.)

###Output _____no_output_____

###Markdown So, would a resolution of $R=50,000$ make sense for DESI? No, since the line would already be broader than that simply due to the thermal velocity of the emitting gas in the galaxy. Let's see this. If we correctly include the broadening from both the velocity dispersion of the emitting gas and the instrument, the width saturates no matter the instrumental resolution: ###Code def dlamba_tot(R, z, center, v=1.e-4):
  # Gaussian widths add in quadrature; (https://es.wikipedia.org/wiki/Propagación_de_errores).
  return np.sqrt(dlamba_inst(R, z, center)**2. + width(center, v)**2.)
fig, ax = plt.subplots(1, 1, figsize=(10,10))

ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal')

for R in [1000., 2000., 3000., 4000., 5.e4]:
  ax.plot(wave, gaussian(wave, lambdaa, dlamba_tot(R, 0.25, lambdaa)), label='R={:.0f}'.format(R))

ax.set_xlabel('Wavelength [Angstroms]')
ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]')

ax.legend(frameon=False, loc=1)
ax.set_xlim(3710., 3750.)

###Output _____no_output_____

###Markdown So you can see that with an inadequate instrument, [OII] would be blurred and totally useless to us! But we need to know what is good enough. Let's try. The resolution $R$ defines the resolution element via $R= (\lambda / \Delta \lambda)$, as above; for a galaxy at redshift $z$, for example: ###Code R = 9.e3
z = 1.00 ###Output _____no_output_____ ###Markdown giving the width of a resolution element as ###Code dlambda = OII * (1 + z) / R  # [Angstroms]. ###Output _____no_output_____ ###Markdown A famous [theorem](https://es.wikipedia.org/wiki/Teorema_de_muestreo_de_Nyquist-Shannon), incidentally an entry point to [Information Theory](https://es.wikipedia.org/wiki/Teor%C3%ADa_de_la_información) and the digital world, tells us that we need to sample a resolution element at least _twice_ to accurately reconstruct the (band-limited) function being sampled. To be safe, we will sample it three times, giving a pixel width of 1/3 of the resolution element: ###Code # Width of a pixel in Angstroms, rather than of the resolution element.
dlambda /= 3.

# Match the wavelengths to the pixel grid.
wave = np.arange(3600, 1.e4, dlambda) ###Output _____no_output_____ ###Markdown Now, the Mayall Telescope used by DESI has a 3.8 m diameter (primary) mirror, and so an area of ###Code # Area of DESI's circular primary mirror.
Area = np.pi * (3.8 / 2.)**2.

# [m] to [cm].
Area *= 1.e4

Area  # [cm^2] ###Output _____no_output_____ ###Markdown with this mirror gently curved to focus the light to a point at the [focus](https://en.wikipedia.org/wiki/Cassegrain_reflector), with a focal length of 10.7 m. When DESI points at the sky, it instantly collects light through 5000 individual fibers at once. You can see 500 of them in one wedge-shaped "petal" below. Each fiber has a diameter of $w=107 \mu m$, or $10^{-4}$ m, and 10 of these petals populate the DESI focal plane. With the focal length $f_{\rm{M1}} = 10.7$ m, each fiber receives light from a circular patch of sky of $\theta \simeq (w/2) \ / \ f_{\rm{M1}}$. ###Code # Angular radius of the fiber, rather than the diameter.
theta = 107e-6 / 2 / 10.7  # [radians]

theta *= 180. / np.pi  # [degrees]

theta *= 60. * 60.  # [arcseconds]

theta  # [arcseconds] ###Output _____no_output_____ ###Markdown In reality, the 'plate scale' varies, such that a better approximation is 1.5 arcseconds. ###Code theta = 1.5  # [arcseconds] ###Output _____no_output_____ ###Markdown Each fiber has a small motor that can travel to observe any galaxy within each circle shown: (You can explore more examples with the [viewer](https://www.legacysurvey.org/viewer)). The light received by each fiber is redirected along an optical fiber to eventually land on a single pixel of a CCD, where each photon is converted into an electron by the [photoelectric effect](https://es.wikipedia.org/wiki/Efecto_fotoeléctrico): one of the first discoveries in Quantum Mechanics, made by Einstein! Our close cousin, the Dark Energy Survey, observes on an identical twin of the Mayall in Chile and has some of the prettiest [CCDs](https://www.darkenergysurvey.org/the-des-project/instrument/the-camera/) around (each rectangle).
In total, sixty-two CCDs are shown, with 2048 x 4096 pixels each, for a total of 520 million pixels! By comparison, the latest iPhones have [12 million pixels](https://www.iphonefaq.org/archives/976253). Now, the number of galaxies we need (17 million ELGs) defines the [OII] line flux (a measure of brightness) we need to reach; that is our target. ###Code line_flux = 8.e-17  # [ergs/s/cm2]. ###Output _____no_output_____ ###Markdown Let's talk units. An erg is $10^{-7}$ Joules, so this is a very small amount of energy, in Joules, arriving per second, per cm$^2$. ###Code def model(wave, sigma, z, r=0.7):
  # Unit amplitude; sigma is the line width, z is the redshift, and r is the relative amplitude of the two lines in the doublet.
  return 1. / (1. + r) / np.sqrt(2. * np.pi) / sigma * (r * np.exp(- ((wave - lambdaa * (1. + z)) / np.sqrt(2.) / sigma)**2.) + np.exp(- ((wave - lambdab * (1. + z)) / np.sqrt(2.) / sigma)**2.))

width = dlamba_inst(R, z, lambdaa)

profile  = model(wave, width, z)  # [1/Angstrom].
profile *= line_flux  # [ergs/s/cm2/Angstrom].
profile *= dlambda  # [ergs/s/cm2/pixel].

pl.clf()
pl.plot(wave, profile)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [ergs/s/cm2/pixel]')
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)

# Summing over pixels gives back the total flux in the line.
np.sum(profile)  # [ergs/s/cm2]. ###Output _____no_output_____ ###Markdown The energy of each OII [photon](https://es.wikipedia.org/wiki/Fotón) we receive can be calculated using $E=h \nu$, where $h=6.626 \times 10^{-34} J \cdot s$ and the frequency is given by $c = \nu \cdot \lambda$. ###Code c = 2.9979e8 * 1.e10  # [Angstrom/s].

nus = c / wave  # [Hertz] = [s^{-1}].
Energys = 6.626e-34 * nus  # [Joules]

Energys *= 1.e7  # [ergs] ###Output _____no_output_____ ###Markdown So the faintest OII-emitting galaxy we could observe would result in each DESI pixel (in wavelength; 15 $\mu m$ in physical size) receiving a number of photons per second given by ###Code # ergs per ... to photons per ...
profile /= Energys  # [photons/s/cm2/pixel].

# Photons received by a DESI pixel per second (assuming no fiber losses).
profile *= Area  # [photons/s/pixel/M1].

# Total number of photons received by DESI from the source.
np.sum(profile)  # [photons/s/M1] ###Output _____no_output_____ ###Markdown Now, the quantum efficiency of a CCD is not 100%, so not every photon produces an electron. Rather, electrons are produced at a rate of 60 per 100 photons (an efficiency of 60%). ###Code QE = 0.6

profile *= QE  # [electrons/s/pixel/M1]. ###Output _____no_output_____ ###Markdown To counter this inefficiency we take an exposure lasting 15 minutes, during which electrons accumulate in the CCD pixels. ###Code exptime = 15. * 60.  # [seconds]

profile *= exptime  # [electrons/exposure/pixel/M1]

pl.plot(wave, profile)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

###Output _____no_output_____

###Markdown But there is another small problem. As the galaxy's light travels through the atmosphere, it gets knocked about in such a way that it appears smeared on the sky. The apparent size (in arcseconds) of a star, which should really look like a point, is set by what is known as the ["seeing"](https://es.wikipedia.org/wiki/Seeing). The buffeting can be so strong, depending on the weather, that starlight can be lost from the fiber even when it is correctly centered. Let's see this.
###Code def moffatt(r, fwhm, beta=3.5):
    ## Apparent radial profile of a star's light, smeared out by the buffeting of the atmosphere.
    ## Section 4 of https://iopscience.iop.org/article/10.1086/675808/pdf; [arcsecond].
    alpha = fwhm / 2. / (2.**(1./beta) - 1.)**0.5
    return (2. * (beta - 1.) / alpha / alpha) * (1. + (r/alpha)**2.)**-beta

fwhm = 2.0
dr = 0.01
rs = np.arange(0.0, 15., dr)  ## [arcseconds].
ms = moffatt(rs, fwhm)

pl.axvline(theta, alpha=0.25, c='k')
pl.plot(rs, ms, c='k')
pl.xlabel('Distance from the center of the star [arcseconds]')
pl.ylabel('Apparent relative brightness of the star')
pl.xlim(left=-0.1, right=6.0)

# Range of full-width at half-maximum values for the seeing.
fwhms = np.arange(0.5, 3.5, 0.1)

# Find the index on the distance grid closest to the size of a fiber.
indx = np.abs(rs - theta).argmin()

# A list to collect the fraction of the light that makes it down the fiber, for each value of the seeing.
fiberfracs = []

# Loop over all the seeing values.
for i, fwhm in enumerate(fwhms):
  # Work out the radial profile of the star.
  ms = moffatt(rs, fwhm)

  # Integrate it to get the total light within a given radius.
  Is = 2. * np.pi * dr * np.cumsum(rs * ms)

  # Compute the fiber fraction for every radius on the grid.
  ffrac = Is / Is[-1]

  # Save the fiber fraction at the radius corresponding to the fiber size.
  fiberfracs.append(ffrac[indx])

fiberfracs = np.array(fiberfracs)

pl.plot(fwhms, fiberfracs)
pl.xlim(0.5, 3.0)
pl.xlabel(r'$(FWHM) \ Seeing \ [{\rm arcseconds}]$')
pl.ylabel(r'FIBER FRAC.')

###Output _____no_output_____

###Markdown So, as (highly) [turbulent](https://es.wikipedia.org/wiki/Turbulencia) air moves through the atmosphere, the galaxy's light is smeared out by an amount set by the "seeing". When the seeing is bad, $\simeq 3.0$ arcseconds, 60% of the light can be lost!
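A quick, self-contained check of that claim, using the closed-form enclosed-flux fraction of a Moffat profile, $1 - (1 + (r/\alpha)^2)^{1-\beta}$, rather than the numerical integral above (a sketch; the fiber radius of 1.5 arcseconds and $\beta = 3.5$ are taken from the surrounding cells):

```python
# Sketch: fraction of a Moffat (beta = 3.5) star profile captured by a fiber
# of radius 1.5 arcseconds, for 3.0 arcsecond (FWHM) seeing.
beta  = 3.5
fwhm  = 3.0   # seeing [arcseconds]
theta = 1.5   # fiber radius [arcseconds]

alpha    = fwhm / 2. / (2.**(1. / beta) - 1.)**0.5

# Closed-form enclosed flux of a Moffat profile within radius theta.
captured = 1. - (1. + (theta / alpha)**2.)**(1. - beta)

print('Captured: {:.0%}; lost: {:.0%}'.format(captured, 1. - captured))
# → roughly 39% captured, i.e. about 60% lost.
```

This analytic shortcut agrees with the cumulative-sum integration in the cell above, which is a useful sanity check on the numerics.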
DESI needs something like one arcsecond of seeing to observe; otherwise, we simply throw the data away. Ultimately, this means we can expect 80% of the light to be captured in a normal exposure: ###Code fiberfrac = 0.8

profile *= fiberfrac  # [electrons/exposure/pixel/M1] ###Output _____no_output_____ ###Markdown Now, depending on the phases of the moon, each fiber placed on a galaxy also receives an amount of "background" (moon)light, originating from light _scattered_ by the atmosphere. This background depends strongly on the lunar phase; for ELGs, we must avoid observing close to full moon. As a side note, with an apparent angular diameter of $0.5$ degrees, the moon would fit $\approx 6 \times$ side by side across the DESI field of view (3.2 degrees in diameter). A typical level for the background light is 6.4e-18 erg/cm$^2/s/$Angstrom/sq. arcsecond, with a projected fiber area given by ###Code fib_area = np.pi * theta**2.  # [sq. arcsecond]

fib_area ###Output _____no_output_____ ###Markdown The corresponding _background_ level of photons received by a DESI pixel per second (as before): ###Code background = 3.4e-18  # [erg/s/cm2/Angstrom/sq. arcsecond].

background *= fib_area

background  # [erg/s/cm2/Angstrom]. ###Output _____no_output_____ ###Markdown which we convert in the same way as before: ###Code background /= Energys  # [photons/s/cm2/Angstrom].
background *= dlambda  # [photons/s/cm2/pixel].

# Background photons received by a DESI pixel per second (assuming no fiber losses).
background *= Area  # [photons/s/pixel/M1].

# Efficiency.
background *= QE  # [electrons/s/pixel/M1].

background *= exptime  # [electrons/exposure/pixel/M1].
background ###Output _____no_output_____ ###Markdown The background noise is Poisson: on average we expect the background level of electrons, but any given exposure will fluctuate about it according to a known [distribution](https://en.wikipedia.org/wiki/Poisson_distribution). Assuming the number of measured electrons is dominated by the background, the variance we expect in the measured electron count is that of a Poisson distribution: ###Code pixel_variance = background  # [electrons/exposure/pixel/M1].

noise = []

for p in background:
  noise.append(np.random.poisson(p, 1)[0])

noise = np.array(noise)

noise

data = profile + noise

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, noise)
pl.plot(wave, data)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

###Output _____no_output_____

###Markdown DESI has dedicated fibers that point at blank sky, rather than at galaxies. This makes it possible to measure the sky background and subtract its mean level: ###Code data -= background

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, data)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

###Output _____no_output_____

###Markdown We need to establish whether this is good enough! This will be a fitting exercise, as in the Introduction. We will define a goodness-of-fit metric: $$\chi^2 = \sum_p \left ( \frac{D_p - A \cdot M_p}{\sigma_p} \right )^2$$ which accumulates the (error-weighted) squared distance of the data from the model. Here $A$ represents the line flux, $M$ is the model we defined above, and $\sigma_p$ is the (background-dominated) standard deviation of the electron count in each pixel.
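As a toy illustration of what minimizing this $\chi^2$ over the amplitude $A$ looks like (with made-up numbers, not the DESI values used here), we can compare the closed-form estimate against a brute-force scan:

```python
import numpy as np

# Toy setup: a 'template' Mp, synthetic data Dp = A_true * Mp + noise,
# and per-pixel standard deviations sigma (all numbers hypothetical).
rng    = np.random.default_rng(42)
Mp     = np.exp(-0.5 * (np.arange(50) - 25.)**2. / 9.)
sigma  = np.full(50, 0.05)
A_true = 3.0
Dp     = A_true * Mp + rng.normal(0., sigma)

# Closed-form minimum of chi^2 = sum_p ((Dp - A * Mp) / sigma)^2.
A_hat  = np.sum(Dp * Mp / sigma**2.) / np.sum(Mp**2. / sigma**2.)

# Brute-force check: evaluate chi^2 on a grid of A values.
As     = np.linspace(0., 6., 601)
chi2   = [np.sum(((Dp - A * Mp) / sigma)**2.) for A in As]
A_grid = As[np.argmin(chi2)]

print(A_hat, A_grid)  # both close to A_true = 3.0
```

The grid search and the analytic formula land on the same answer, which is exactly the shortcut derived next.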
Differentiating this with respect to $A$, we find the best-fitting line flux (remember, the true value was defined above): $A = \left (\sum_p D_p M_p / \sigma_p^2 \right ) / \left (\sum_p M_p^2 / \sigma_p^2 \right )$, or ###Code # Estimated line flux.
Mp  = model(wave, width, z) * dlambda  # [ergs/s/cm2/pixel]

Mp /= Energys    # [photons/s/cm2/pixel].
Mp *= Area       # [photons/s/pixel/M1].
Mp *= QE         # [electrons/s/pixel/M1].
Mp *= exptime    # [electrons/exposure/pixel/M1].
Mp *= fiberfrac  # [electrons/exposure/pixel/M1].

pl.plot(wave, data)
pl.plot(wave, Mp * line_flux)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

est_line_flux = np.sum(data * Mp / pixel_variance) / np.sum(Mp**2. / pixel_variance)

est_line_flux ###Output _____no_output_____ ###Markdown Amazing! We have managed to measure the line flux of our emission-line galaxy. Now, what is the error on our measurement? You can obtain it from the second derivative of $\chi^2$: $\sigma_A^{-2} = \left ( \frac{1}{2} \right ) \frac{\partial^2 \chi^2}{\partial A^2} = \sum_p \frac{M_p^2}{\sigma_p^2}$. ###Code varA = np.sum(Mp**2 / pixel_variance)

sigA = 1. / np.sqrt(varA)

sigA ###Output _____no_output_____ ###Markdown This gives a signal-to-noise ratio (how many times larger the 'signal' is than the noise) of $SNR = A / \sigma_A$. ###Code SNR = est_line_flux / sigA

print('For an OII line with line flux {:.3e}, at resolution {:.3f}, the SNR is {:.3f}!'.format(line_flux, R, SNR)) ###Output For an OII line with line flux 8.000e-17, at resolution 9000.000, the SNR is 26.074! ###Markdown OII and more galaxies Whether observing ever more distant galaxies, or collecting those we previously missed, our science always improves with more galaxies!
The problem is that we have already surveyed all the easy, bright galaxies, and things only get harder as we are forced to observe the faintest galaxies we know of. We have to be smart about how we do this, and sometimes a surprising opportunity presents itself ... _Warning: this notebook ramps up the difficulty, to let us design more fun experiments based on what you will learn here. If you have any trouble, [ask](https://github.com/michaelJwilson/DESI-HighSchool/issues/new/choose). Stick with us!_ Tired of listening to your parents? Atoms feel the same way. Their life is a series of rules, rules, rules. Do this, don't do that; the [list](https://es.wikipedia.org/wiki/Transición_electrónica) is long. But sometimes they get tired and rebel, ![title](../desihigh/images/Climate.png) It turns out that a rebellion, every now and then, can be a good thing. For instance, (doubly) ionized oxygen, or [OII], (incredibly rarely) emits a unique doublet that it otherwise [would not](https://es.wikipedia.org/wiki/L%C3%ADnea_prohibida). Let's see what happens. ###Code # Wavelengths of the OII doublet.
lambdaa = 3727.092  # Angstroms
lambdab = 3729.875  # Angstroms

# Weighted-mean wavelength.
OII = 3728.483

# Width of each line due to thermal broadening.
def width(center, dv):
  # velocity difference [speed of light]
  return center * dv

wave = np.arange(1000, 1.e4, 0.05)

dlambdaa = width(lambdaa, 1.e-4)
dlambdab = width(lambdab, 1.e-4)

def gaussian(wave, center, width):
  # Unnormalized Gaussian line profile; https://es.wikipedia.org/wiki/Función_gaussiana
  return np.exp(-0.5 * (wave - center)**2. / width**2.)
ax = pl.gca()

ax.fill_between(wave, 0., gaussian(wave, lambdaa, dlambdaa), color='b', alpha=1.0)
ax.fill_between(wave, 0., gaussian(wave, lambdab, dlambdab), color='b', alpha=1.0)
ax.fill_between(wave, 0., gaussian(wave, 3889.0, width(3889.0, 1.e-4)), color='k', alpha=1.)

pl.xlim(3700., 3900.)
pl.ylim(0.25, 1.1)
pl.xlabel('Wavelength [AA]')
pl.ylabel('Normalized flux')

###Output _____no_output_____

###Markdown First, the _forbidden_ [OII] transitions (blue) form a doublet of two closely spaced lines. These have a finite width because the emitting stars are moving (at 0.01% of the speed of light in this example), leading to the usual Doppler broadening. Contrast this with the black He I line, which is a single line, or "singlet". The problem is that a single line emitted by a galaxy at one redshift can look like a different line at another redshift. Your turn: if there were a Lyman-$\alpha$ emitter at $z=4.0$, could you tell it apart from an H-$\alpha$ emitter (6564.61 Angstroms) at a different redshift? What redshift would this second galaxy have? Remember, the observed wavelength is $(1 + z) \ \times$ the rest-frame wavelength, and Lyman-$\alpha$ is the 2-1 transition of hydrogen we saw in the introduction. So [OII] is unique in the sense that, being a doublet, we are more likely to be able to distinguish it from singlets at other redshifts. The second great thing is that it is the second strongest line emitted by young stars (the first being H-$\alpha$), as in the Orion nebulae, an iconic image of star formation: High-redshift galaxies host younger, more actively forming stars, and so emit plenty of [OII]. So as we look further away, we are more likely to see OII emitters.
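Returning to the redshift-confusion question posed above, we can check it numerically (a sketch; the Lyman-$\alpha$ rest wavelength of 1215.67 Angstroms is an added assumption, from the hydrogen 2-1 transition):

```python
# Could a Lyman-alpha emitter at z = 4.0 be confused with an H-alpha
# emitter at some other redshift?
lya    = 1215.67   # Lyman-alpha rest wavelength [Angstroms] (assumed)
halpha = 6564.61   # H-alpha rest wavelength [Angstroms]

observed = (1. + 4.0) * lya        # observed wavelength of the Lyman-alpha line
z_halpha = observed / halpha - 1.  # redshift an H-alpha emitter would need

print(observed, z_halpha)
```

The required H-$\alpha$ redshift comes out negative, a blueshift, so no distant H-$\alpha$ emitter can mimic this line and the two could be told apart.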
Since these galaxies are so far away, it would be very hard to detect something so faint were it not for this OII emission: ###Code zs = np.arange(0.01, 1.7, 0.01)

lumdists = Planck15.luminosity_distance(zs)

faints = (lumdists / lumdists[0])**2.

pl.xlabel(r'$z$')
pl.ylabel('Faintness relative to the galaxy sitting in your lap')

pl.semilogy(zs, faints)

###Output _____no_output_____

###Markdown At $z=0.25$, a galaxy is 1000 times fainter than it would be sitting in your lap. By $z=1.75$, the most distant ELG detected by DESI, it is 10,000 times fainter (exactly how much fainter depends on whether there is Dark Energy in the Universe; here we assume the ~70% we learned about in the Introduction). [Astropy](https://docs.astropy.org/en/stable/index.html) makes this really easy to work out, but it would be much better to understand how to get there. For a feel, try [here](https://in-the-sky.org/article.php?term=cosmological_distance). So, we want emission-line galaxies (ELGs) with an OII doublet. We had better make sure that our telescope, and our instrument for dispersing the light, are capable of detecting and "resolving" this faint signal. Crucially, our instrument must be designed to ensure the doublet is not blurred, since blurring would turn the doublet into a singlet and lead to exactly the confusion we would like to avoid. The question is, how should we do this? Would a simple laboratory [prism](https://es.wikipedia.org/wiki/Prisma_(óptica)) be enough? The answer is no: the prism would have to be too large, and lose too much light, to achieve the required dispersion (separation between colors). We need something more advanced, a grating, which can disperse light through diffraction (or reflection) and the interference caused by a series of slits etched into metal (with diamond). See [here](https://es.wikipedia.org/wiki/Red_de_difracción) for more details.
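Before turning to the hardware, it helps to put a rough number on the dispersion requirement just described. For the doublet to be visibly split, the instrumental width at the doublet, $\lambda / R$, must be below the doublet separation; since both the separation and the observed wavelength scale as $(1 + z)$, the redshift drops out (a sketch, using the doublet wavelengths defined above):

```python
# Rough minimum resolution R needed to split the [OII] doublet: require the
# instrumental width, lambda / R, to be below the doublet separation.
# Both scale as (1 + z), so z cancels.
lambdaa = 3727.092  # [Angstroms]
lambdab = 3729.875  # [Angstroms]
OII     = 3728.483  # weighted-mean wavelength [Angstroms]

separation = lambdab - lambdaa  # [Angstroms], rest frame.
R_min      = OII / separation

print('R must exceed roughly {:.0f}'.format(R_min))
# → roughly 1340.
```

In practice the thermal width adds in quadrature with the instrumental width, so a comfortable margin above this floor is wanted, which is why resolutions of a few thousand appear below.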
In fact, DESI uses a special grating that varies the [refractive index](https://es.wikipedia.org/wiki/Índice_de_refracción) of the glass thousands of times per millimeter to achieve the same [effect](https://arxiv.org/pdf/1611.00037.pdf): Etching these lines is expensive, so we should minimize how many we need. You wouldn't waste money out of your own pocket, would you? So, what resolution do we _need_ to do science with (OII) emission-line galaxies? And what does that mean for the instrument we need to build? The resolution $R$ is defined as $(\lambda / \Delta \lambda)$, where $\Delta \lambda$ is the effective width of a (Gaussian) line. So as the instrumental resolution decreases, our observed lines broaden: ###Code def dlamba_inst(R, z, center):
  # Equation (2) of https://arxiv.org/pdf/1310.0615.pdf
  return (1. + z) * center / R  # [Angstroms]

fig, ax = plt.subplots(1, 1, figsize=(10,10))

for R in [1000., 2000., 3000., 4000., 5.e4]:
  ax.plot(wave, gaussian(wave, lambdaa, dlamba_inst(R, 0.25, lambdaa)), label='R={:.0f}'.format(R))

ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal')

ax.set_xlabel('Wavelength [Angstroms]')
ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]')

ax.legend(frameon=False, loc=1)
ax.set_xlim(3710., 3750.)

###Output _____no_output_____

###Markdown So, would a resolution of $R=50,000$ make sense for DESI? No, since the line would already be broader than that simply due to the thermal velocity of the emitting gas in the galaxy. Let's see this. If we correctly include the broadening from both the velocity dispersion of the emitting gas and the instrument, the width saturates no matter the instrumental resolution: ###Code def dlamba_tot(R, z, center, v=1.e-4):
  # Gaussian widths add in quadrature; (https://es.wikipedia.org/wiki/Propagación_de_errores).
  return np.sqrt(dlamba_inst(R, z, center)**2. + width(center, v)**2.)
fig, ax = plt.subplots(1, 1, figsize=(10,10))

ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal')

for R in [1000., 2000., 3000., 4000., 5.e4]:
    ax.plot(wave, gaussian(wave, lambdaa, dlamba_tot(R, 0.25, lambdaa)), label='R={:.0f}'.format(R))

ax.set_xlabel('Wavelength [Angstroms]')
ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]')
ax.legend(frameon=False, loc=1)
ax.set_xlim(3710., 3750.)

###Output

_____no_output_____

###Markdown

So you can see that with an insufficient instrument, [OII] becomes blurred and totally useless to us! But we need to know what is good enough. Let's try. The resolution $R$ defines the resolution element as $R = (\lambda / \Delta \lambda)$, as above, for a galaxy at redshift $z$, e.g.:

###Code

R = 9.e3
z = 1.00

###Output

_____no_output_____

###Markdown

giving the width of a resolution element as

###Code

dlambda = OII * (1 + z) / R  # [Angstroms].

###Output

_____no_output_____

###Markdown

A famous [theorem](https://es.wikipedia.org/wiki/Teorema_de_muestreo_de_Nyquist-Shannon) - incidentally, an entry point to [Information Theory](https://es.wikipedia.org/wiki/Teor%C3%ADa_de_la_información) and the digital world - tells us that we need to sample a resolution element at least _twice_ to accurately reconstruct the (band-limited) function being sampled. To be safe, we will sample it three times, giving a pixel width of 1/3 the width of the resolution element:

###Code

# Width of a pixel in Angstroms, rather than a resolution element.
dlambda /= 3.

# Let's match our wavelength grid to this grid of pixels.
wave = np.arange(3600, 1.e4, dlambda)

###Output

_____no_output_____

###Markdown

Now, the Mayall Telescope used by DESI has a (primary) mirror 3.8 m in diameter, so it has an area of

###Code

# Area of the circular DESI primary mirror.
Area = np.pi * (3.8 / 2.)**2.

# [m] to [cm].
Area *= 1.e4

Area  # [cm^2]

###Output

_____no_output_____

###Markdown

with this mirror gently curved so as to focus the light to a point at the [focus](https://en.wikipedia.org/wiki/Cassegrain_reflector), with a focal length of 10.7 m. When DESI points at the sky, it instantaneously snapshots the light collected by 5000 individual fibers at once. You can see 500 of them in one wedge-shaped "petal" below. Each fiber has a diameter $w=107 \mu m$, or $10^{-4}m$, and 10 of the petals above populate the DESI focal plane. With the focal length of $f_{\rm{M1}} = 10.7$ m, each fiber receives light from a circular patch of sky of $\theta \simeq (w/2) \ / \ f_{\rm{M1}}$.

###Code

# Angular radius of a fiber, rather than its diameter.
theta = 107e-6 / 2 / 10.7  # [radians]
theta *= 180. / np.pi      # [degrees]
theta *= 60. * 60.         # [arcseconds]

theta  # [arcseconds]

###Output

_____no_output_____

###Markdown

In reality, the 'plate scale' varies such that a better approximation is 1.5 arcseconds.

###Code

theta = 1.5  # [arcseconds]

###Output

_____no_output_____

###Markdown

Each fiber has a small motor that can travel to observe any galaxy within each circle shown: (You can find more examples with the [viewer](https://www.legacysurvey.org/viewer/IC%201229)). The light received by each fiber is redirected along an optical fiber to eventually land on a single pixel of a CCD camera, where each photon is converted into an electron by the [photoelectric effect](https://es.wikipedia.org/wiki/Efecto_fotoeléctrico): one of the first discoveries in Quantum Mechanics, made by Einstein! Our close cousin, the Dark Energy Survey, observes on an identical twin of the Mayall in Chile and has some of the prettiest [CCDs](https://www.darkenergysurvey.org/the-des-project/instrument/the-camera/) around (each rectangle).
In total, sixty-two CCDs are shown, with 2048 x 4096 pixels each, for a total of 520 million pixels! By comparison, the latest iPhones have [12 million pixels](https://www.iphonefaq.org/archives/976253). Now, the number of galaxies we need (17 million ELGs) defines the [OII] line luminosity (amount of brightness) we need to reach - that's our target.

###Code

line_flux = 8.e-17  # [ergs/s/cm2].

###Output

_____no_output_____

###Markdown

Let's talk units. An erg is $10^{-7}$ Joules, so this is a very small amount of energy, in Joules, arriving per second, in one $cm^2$.

###Code

def model(wave, sigma, z, r=0.7):
    # Unit amplitude; sigma is the width of the line, z is the redshift and r is the relative amplitude of the lines in the doublet.
    return 1. / (1. + r) / np.sqrt(2. * np.pi) / sigma * (r * np.exp(- ((wave - lambdaa * (1. + z)) / np.sqrt(2.) / sigma)**2.) + np.exp(- ((wave - lambdab * (1. + z)) / np.sqrt(2.) / sigma)**2.))

width = dlamba_inst(R, z, lambdaa)

profile = model(wave, width, z)  # [1/Angstrom].
profile *= line_flux             # [ergs/s/cm2/Angstrom].
profile *= dlambda               # [ergs/s/cm2/pixel].

pl.clf()
pl.plot(wave, profile)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [ergs/s/cm2/pixel]')
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)

# Summing over pixels gives us the total line flux again.
np.sum(profile)  # [ergs/s/cm2].

###Output

_____no_output_____

###Markdown

The energy of each OII [photon](https://es.wikipedia.org/wiki/Fotón) we receive can be calculated using $E=h \nu$, where $h=6.626 \times 10^{-34} J \cdot s$ and the frequency is given by $c = \nu \cdot \lambda$.

###Code

c = 2.9979e8 * 1.e10  # [Angstrom/s].
nus = c / wave        # [Hertz] = [s^{-1}].
Energys = 6.626e-34 * nus  # [Joules]
Energys *= 1.e7            # [ergs]

###Output

_____no_output_____

###Markdown

So, the faintest OII-emitting galaxy we could observe would result in each DESI pixel (in wavelength; 15 $\mu m$ in physical size) receiving a number of photons per second given by

###Code

# ergs per ... to photons per ...
profile /= Energys  # [photons/s/cm2/pixel].

# Photons received by a DESI pixel per second (assuming no fiber losses).
profile *= Area     # [photons/s/pixel/M1].

# Total number of photons received by DESI from the source.
np.sum(profile)     # [photons/s/M1]

###Output

_____no_output_____

###Markdown

Now, the quantum efficiency of a CCD is not 100%, so not every photon produces an electron. Rather, they are produced at a rate of 60 electrons per 100 photons (an efficiency of 60%).

###Code

QE = 0.6

profile *= QE  # [electrons/s/pixel/M1].

###Output

_____no_output_____

###Markdown

To counter this inefficiency we take an exposure lasting 15 minutes, during which electrons accumulate in the CCD pixels.

###Code

exptime = 15. * 60.  # [seconds]

profile *= exptime   # [electrons/exposure/pixel/M1]

pl.plot(wave, profile)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

###Output

_____no_output_____

###Markdown

But there is another small problem. As the light from the galaxy travels through the atmosphere, it gets buffeted such that it appears smeared out on the sky. The apparent size (in arcseconds) of a star that should really look like a point is due to what is known as ["seeing"](https://es.wikipedia.org/wiki/Seeing). The buffeting can be so strong, depending on the weather, that starlight can miss the fiber even when it is correctly centered. Let's see this.
###Code

def moffatt(r, fwhm, beta=3.5):
    ## Apparent radial profile of starlight due to buffeting by the atmosphere.
    ## Sec. 4 of https://iopscience.iop.org/article/10.1086/675808/pdf; [arcsecond].
    alpha = fwhm / 2. / (2.**(1./beta) - 1.)**0.5
    return (2. * (beta - 1.) / alpha / alpha) * (1. + (r/alpha)**2.)**-beta

fwhm = 2.0
dr = 0.01
rs = np.arange(0.0, 15., dr)  ## [arcseconds].
ms = moffatt(rs, fwhm)

pl.axvline(theta, alpha=0.25, c='k')
pl.plot(rs, ms, c='k')
pl.xlabel('Distance from center of star [arcseconds]')
pl.ylabel('Relative apparent brightness of star')
pl.xlim(left=-0.1, right=6.0)

# A range of full-width @ half-maximum values for the seeing.
fwhms = np.arange(0.5, 3.5, 0.1)

# Find the index in our distance grid closest to the size of a fiber.
indx = np.abs(rs - theta).argmin()

# A list to collect the fraction of light down a fiber for each value of the seeing.
fiberfracs = []

# Loop over the seeing values.
for i, fwhm in enumerate(fwhms):
    # Work out the radial profile of the star.
    ms = moffatt(rs, fwhm)

    # Integrate this to get the total light within a given radius.
    Is = 2. * np.pi * dr * np.cumsum(rs * ms)

    # Calculate the enclosed fraction at each radius.
    ffrac = Is / Is[-1]

    # Save the fraction at the radius corresponding to the fiber size.
    fiberfracs.append(ffrac[indx])

fiberfracs = np.array(fiberfracs)

pl.plot(fwhms, fiberfracs)
pl.xlim(0.5, 3.0)
pl.xlabel(r'$(FWHM) \ Seeing \ [{\rm arcseconds}]$')
pl.ylabel(r'FIBER FRAC.')

###Output

_____no_output_____

###Markdown

So, as the (highly) [turbulent](https://es.wikipedia.org/wiki/Turbulencia) air moves in the atmosphere, the light from the galaxy is smeared out depending on the "seeing". When this gets as bad as $\simeq 3.0$ arcseconds, 60% of the light can be lost!
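As a cross-check of the curve above, we can read off the fiber fraction at two representative seeing values directly. This sketch simply repeats the Moffat profile and the enclosed-light integral from the cell above, in self-contained form:

```python
import numpy as np

# Self-contained check: fraction of light entering a 1.5" fiber at two seeing values,
# repeating the Moffat profile and enclosed-light integral defined above.
def moffatt(r, fwhm, beta=3.5):
    alpha = fwhm / 2. / (2.**(1. / beta) - 1.)**0.5
    return (2. * (beta - 1.) / alpha / alpha) * (1. + (r / alpha)**2.)**-beta

theta = 1.5                    # fiber radius [arcseconds]
dr = 0.01
rs = np.arange(0.0, 15., dr)   # [arcseconds]
indx = np.abs(rs - theta).argmin()

fracs = {}

for fwhm in [1.0, 3.0]:
    # Cumulative light enclosed within radius r, normalised by the total.
    Is = 2. * np.pi * dr * np.cumsum(rs * moffatt(rs, fwhm))
    fracs[fwhm] = Is[indx] / Is[-1]
    print('Seeing of {:.1f}" -> {:.0%} of the light makes it down the fiber.'.format(fwhm, fracs[fwhm]))
```

Roughly 90% of the light makes it in at 1.0 arcsecond of seeing, versus only ~40% at 3.0 arcseconds, consistent with the "60% can be lost" figure above.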
DESI needs something like one-arcsecond seeing to observe; otherwise, we simply throw away the data. But ultimately, this means we can expect 80% of the light to be captured in a normal exposure:

###Code

fiberfrac = 0.8

profile *= fiberfrac  # [electrons/exposure/pixel/M1]

###Output

_____no_output_____

###Markdown

Now, depending on the phases of the moon, each fiber placed on a galaxy also receives a "background" amount of (moon)light, originating from light _scattered_ by the atmosphere. This background depends strongly on the phases of the moon; for ELGs, we must avoid observing close to full moon. Side note: with an apparent angular diameter of $0.5$ degrees, the moon would fit $\approx 6 \times$ side by side across the DESI field of view (3.2 degrees in diameter). A typical level for the background light is 3.4e-18 erg / cm$^2/s/$Angstrom / sq. arcsecond, with a projected fiber area given by

###Code

fib_area = np.pi * theta**2.  # [sq. arcsecond]

fib_area

###Output

_____no_output_____

###Markdown

The corresponding _background_ level of photons received by a DESI pixel per second (as before):

###Code

background = 3.4e-18   # [erg/s/cm2/Angstrom/sq. arcsecond].
background *= fib_area

background             # [erg/s/cm2/Angstrom].

###Output

_____no_output_____

###Markdown

which we convert in the same way as before:

###Code

background /= Energys  # [photons/s/cm2/Angstrom].
background *= dlambda  # [photons/s/cm2/pixel].

# Background photons received by a DESI pixel per second (assuming no fiber losses).
background *= Area     # [photons/s/pixel/M1].

# Quantum efficiency.
background *= QE       # [electrons/s/pixel/M1].

background *= exptime  # [electrons/exposure/pixel/M1].
background

###Output

_____no_output_____

###Markdown

The background noise is Poisson: on average we expect the background level of electrons, but any given exposure will fluctuate about it according to a known [distribution](https://en.wikipedia.org/wiki/Poisson_distribution). Assuming the number of measured electrons is dominated by the background, the variance we expect in the number of measured electrons is that of a Poisson distribution:

###Code

pixel_variance = background  # [electrons/exposure/pixel/M1].

noise = []

for p in background:
    noise.append(np.random.poisson(p, 1)[0])

noise = np.array(noise)

noise

data = profile + noise

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, noise)
pl.plot(wave, data)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

###Output

_____no_output_____

###Markdown

DESI has dedicated fibers that point at blank sky, rather than at galaxies. This lets us measure the sky background, so the average level can be subtracted:

###Code

data -= background

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, data)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

###Output

_____no_output_____

###Markdown

We need to establish whether this is enough! This will be a fitting exercise, as in the Introduction. We will define a best-fit metric: $$\chi^2 = \sum_p \left ( \frac{D_p - A \cdot M_p}{\sigma_p} \right )^2$$ which computes the accumulated (error-weighted) squared distance of the data from the model. Here $A$ represents the line flux, $M$ is the model we defined above, and $\sigma_p$ is the (background-dominated) standard deviation of the electrons in each pixel.
If we differentiate this with respect to $A$, we find the best-fit line flux (remember, the truth was defined above): $A = \left (\sum_p D_p M_p / \sigma_p^2 \right ) / \left (\sum_p M_p^2 / \sigma_p^2 \right )$, or

###Code

# Estimated line flux.
Mp = model(wave, width, z) * dlambda  # [ergs/s/cm2/pixel]
Mp /= Energys                         # [photons/s/cm2/pixel].
Mp *= Area                            # [photons/s/pixel/M1].
Mp *= QE                              # [electrons/s/pixel/M1].
Mp *= exptime                         # [electrons/exposure/pixel/M1].
Mp *= fiberfrac                       # [electrons/exposure/pixel/M1].

pl.plot(wave, data)
pl.plot(wave, Mp * line_flux)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

est_line_flux = np.sum(data * Mp / pixel_variance) / np.sum(Mp**2. / pixel_variance)

est_line_flux

###Output

_____no_output_____

###Markdown

Incredible! We have been able to measure the line flux of our emission line galaxy. Now, what is the error on our measurement? You can get this from the second derivative of $\chi^2$: $\sigma_A^{-2} = \left ( \frac{1}{2} \right ) \frac{\partial^2 \chi^2}{\partial A^2} = \sum_p \frac{M_p^2}{\sigma_p^2}$.

###Code

varA = np.sum(Mp**2 / pixel_variance)
sigA = 1. / np.sqrt(varA)

sigA

###Output

_____no_output_____

###Markdown

giving a signal-to-noise ratio (how many times larger the 'signal' is than the noise) of $SNR = A / \sigma_A$.

###Code

SNR = est_line_flux / sigA

print('For an OII line with line flux {:.3e}, at resolution {:.3f}, the SNR is {:.3f}!'.format(line_flux, R, SNR))

###Output

For an OII line with line flux 8.000e-17, at resolution 9000.000, the SNR is 26.123!
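One takeaway from this matched-filter result: the signal electrons grow in proportion to the exposure time, but the background variance grows the same way, so the SNR improves only as the square root of the exposure time. A sketch, taking the SNR of ~26.1 found above as an assumed baseline:

```python
import numpy as np

# Background-limited scaling: signal ~ t and background variance ~ t,
# so SNR ~ sqrt(t). The 26.1 baseline is the value found above (assumed here).
t0 = 15. * 60.   # the 15-minute exposure used above [seconds]
snr0 = 26.1      # SNR found above (assumed baseline for this sketch)

snrs = {t: snr0 * np.sqrt(t / t0) for t in [t0, 2. * t0, 4. * t0]}

for t, snr in sorted(snrs.items()):
    print('t = {:3.0f} min -> SNR ~ {:.1f}'.format(t / 60., snr))
```

Doubling the exposure buys only ~40% more SNR, which is one reason good seeing and a dark, moonless sky are so valuable.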
El problema es que ya hemos analizado todas las galaxias fáciles y brillantes y las cosas se ponen más difíciles a medida que nos vemos obligados a observar las galaxias más débiles que conocemos. Tenemos que ser inteligentes sobre cómo hacemos esto y, a veces, se presenta una oportunidad sorprendente ... _Advertencia: este cuaderno aumenta la dificultad para permitirnos diseñar experimentos más divertidos basados ​​en lo que aprenderá aquí. Si tiene algún problema, [pregunte](https://github.com/michaelJwilson/DESI-HighSchool/issues/new/choose). ¡Quédate con eso!_ ¿Estás cansado de escuchar a tus padres? Los átomos sienten lo mismo. Su vida es una serie de reglas, reglas, reglas. Haz esto, no hagas aquello; la [lista](https://en.wikipedia.org/wiki/Selection_rule) es larga. Pero solo a veces, se cansan y se rebelan, ![title](images/Climate.png) Resulta que una rebelión, de vez en cuando, puede ser algo bueno. Por ejemplo, el oxígeno (doblemente) ionizado o [OII], (increíblemente raramente) emite un doblete único que no lo haría [de otra manera](https://en.wikipedia.org/wiki/Forbidden_mechanism). Veamos qué pasa. ###Code # Wavelengths of OII doublet. lambdaa = 3727.092 # Angstroms lambdab = 3729.875 # Angstorms # Energy weighted mean. OII = 3728.483 # Width of each line due to thermal broadening. def width(center, dv): # velocity difference [speed of light] return center * dv wave = np.arange(1000, 1.e4, 0.05) dlambdaa = width(lambdaa, 1.e-4) dlambdab = width(lambdab, 1.e-4) def gaussian(wave, center, width): # https://en.wikipedia.org/wiki/Gaussian_function norm = np.sqrt(2. * np.pi / width) return np.exp(-0.5 * (wave - center)**2. / width**2.) ax = pl.gca() ax.fill_between(wave, 0., gaussian(wave, lambdaa, dlambdaa), color='b', alpha=1.0) ax.fill_between(wave, 0., gaussian(wave, lambdab, dlambdab), color='b', alpha=1.0) ax.fill_between(wave, 0., gaussian(wave, 3889.0, width(3889.0, 1.e-4)), color='k', alpha=1.) pl.xlim(3700., 3900.) 
pl.ylim(0.25, 1.1) pl.xlabel('Wavelength [AA]') pl.ylabel('Normalised flux') ###Output _____no_output_____ ###Markdown Primero, las transiciones _prohibidas_ [OII] (azul) representan un doblete de dos líneas poco espaciadas. Estos tienen un ancho finito ya que las estrellas emisoras se mueven (al 0.01% en este ejemplo), lo que lleva al ensanchamiento Doppler habitual. Contraste esto con la línea negra He I, que es una sola línea o "singlete". El problema es que una sola línea emitida por una galaxia en un corrimiento al rojo puede parecer una línea diferente en otro corrimiento al rojo. Su turno, si hubiera un emisor Lyman-$\alpha$ en $z=4.0$, ¿podría notar la diferencia de un emisor H-$\alpha$ (6564.61 Angstroms) en un corrimiento al rojo diferente? ¿Qué corrimiento al rojo tendría esta segunda galaxia? Recuerde, la longitud de onda observada es $(1 + z) \ \times$ la longitud de onda del marco de reposo y Lyman-$\alpha$ es la transición 2-1 del hidrógeno que vimos en la introducción. Entonces, [OII] es único en el sentido de que, como doblete, es más probable que podamos distinguirlo de los singletes en diferentes corrimientos al rojo. La segunda gran cosa es que es la segunda línea más fuerte emitida por estrellas jóvenes (la primera es H-$\alpha$), como en las nebulosas de Orión, una imagen icónica de la formación de estrellas: Las galaxias con alto corrimiento al rojo son estrellas más jóvenes y en formación más activa, por lo que emiten gran cantidad de [OII]. Entonces, a medida que miramos más lejos, es más probable que veamos emisores de OII. Como estas galaxias están tan lejos, sería muy difícil detectar algo tan débil si no fuera por esta emisión de OII: ###Code zs = np.arange(0.01, 1.7, 0.01) lumdists = Planck15.luminosity_distance(zs) faints = (lumdists / lumdists[0])**2. 
pl.xlabel(r'$z$') pl.ylabel('Faintness relative to a galaxy in your lap') pl.semilogy(zs, faints) ###Output _____no_output_____ ###Markdown Según $z=0.25$, una galaxia es 1000 veces más débil de lo que sería en tu regazo. Por $z=1.75$, el ELG más lejano detectado por DESI, es 10,000 veces más débil (cuánto más débil depende de si hay Energía Oscura en el Universo; aquí asumimos el ~ 70% que aprendimos en la Introducción). [Astropy](https://docs.astropy.org/en/stable/index.html) hace que esto sea realmente fácil de entender, pero sería mucho mejor entender cómo llegar allí. Para tener una idea, intente [aquí](https://in-the-sky.org/article.php?term=cosmological_distance). Entonces, queremos galaxias de línea de emisión (ELG) con un doblete OII. Será mejor que nos aseguremos de que nuestro telescopio y nuestro instrumento para dispersar la luz sean capaces de detectar y "resolver" esta débil señal. Fundamentalmente, nuestro instrumento debe estar diseñado para asegurar que el doblete no sea borroso, ya que esto convertiría el doblete en un singlete y conduciría a la misma confusión que nos gustaría evitar. La pregunta es, ¿cómo deberíamos hacer esto? Sería un simple laboratorio. [prisma](https://en.wikipedia.org/wiki/Prism_spectrometer) es suficiente? La respuesta es no, el prisma tendría que ser demasiado grande y perder demasiada luz para lograr la dispersión (separación entre colores) requerida. Necesitamos algo más avanzado, una rejilla, que pueda dispersar la luz debido a la difracción (o reflexión) y la interferencia causada por una serie de rendijas grabadas en metal (con diamante). Consulte [aquí](https://en.wikipedia.org/wiki/Diffraction_grating) para obtener más detalles. 
De hecho, DESI usa una rejilla especial que cambia el [índice de refracción](https://en.wikipedia.org/wiki/Refractive_index) del vidrio, miles de veces por milímetro, para lograr el mismo [efecto](https:arxiv.org/pdf/1611.00037.pdf): Grabar estas líneas es costoso, por lo que debemos minimizar la cantidad que necesitamos. No desperdiciarías tu dinero de bolsillo, ¿verdad? Entonces, ¿qué resolución _necesitamos_ para esta ciencia de galaxias de línea de emisión (OII)? ¿Y qué significa eso para el instrumento que necesitamos construir? La resolución $R$ se define como $(\Delta \lambda /\lambda)$, donde $\Delta \lambda$ es el ancho efectivo de una línea (gaussiana). Entonces, a medida que la resolución instrumental disminuye, nuestras líneas observadas se amplían: ###Code def dlamba_inst(R, z, center): # eqn. (2) of https://arxiv.org/pdf/1310.0615.pdf return (1. + z) * center / R # [Angstroms] fig, ax = plt.subplots(1, 1, figsize=(10,10)) for R in [1000., 2000., 3000., 4000., 5.e4]: ax.plot(wave, gaussian(wave, lambdaa, dlamba_inst(R, 0.25, lambdaa)), label='R={:.0f}'.format(R)) ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal') ax.set_xlabel('Wavelength [Angstroms]') ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]') ax.legend(frameon=False, loc=1) ax.set_xlim(3710., 3750.) ###Output _____no_output_____ ###Markdown Entonces, ¿una resolución de $R=50,000$ tendría sentido para DESI? No, ya que la línea sería más ancha debido simplemente a la velocidad térmica del gas emisor en la galaxia. Veamos esto. Si tenemos el ensanchamiento correcto debido a _tanto_ la velocidad de dispersión del gas emisor y el instrumento, el ancho se satura sin importar la resolución instrumental: ###Code def dlamba_tot(R, z, center, v=1.e-4): # Widths of Gaussians add in quadrature; (https://en.wikipedia.org/wiki/Propagation_of_uncertainty). return np.sqrt(dlamba_inst(R, z, center)**2. + width(center, v)**2.) 
fig, ax = plt.subplots(1, 1, figsize=(10,10)) ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal') for R in [1000., 2000., 3000., 4000., 5.e4]: ax.plot(wave, gaussian(wave, lambdaa, dlamba_tot(R, 0.25, lambdaa)), label='R={:.0f}'.format(R)) ax.set_xlabel('Wavelength [Angstroms]') ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]') ax.legend(frameon=False, loc=1) ax.set_xlim(3710., 3750.) ###Output _____no_output_____ ###Markdown ¡Entonces pueden ver que con un instrumento insuficiente, [OII] se volverá borroso y totalmente inútil para nosotros! Pero necesitamos saber qué es lo suficientemente bueno. Intentemos. La resolución $R$ define el elemento de resolución como $R= (\lambda / \Delta \lambda)$, como se indicó anteriormente, para una galaxia con desplazamiento al rojo $z$, por ejemplo: ###Code R = 9.e3 z = 1.00 ###Output _____no_output_____ ###Markdown dando el ancho de un elemento de resolución como ###Code dlambda = OII * (1 + z) / R # [Angstroms]. ###Output _____no_output_____ ###Markdown Un [teorema] muy famoso (https://en.wikipedia.org/wiki/Nyquist-Shannon _muestreo_ teorema) - por cierto, un punto de entrada a la [Teoría de la información] (https: //en.wikipedia .org / wiki / Information _teoría) y el mundo digital: nos dice que necesitamos muestrear un elemento de resolución al menos_ dos veces_ para reconstruir con precisión una función (de paso de banda limitado) sus muestras. Para estar seguros, lo probaremos tres veces, dado un ancho de píxel de 1/3 del ancho del elemento de resolución: ###Code # width of a pixel in Angstroms, rather than a resolution element. dlambda /= 3. # Let's match our wavelengths to this grid of pixels: wave = np.arange(3600, 1.e4, dlambda) ###Output _____no_output_____ ###Markdown Ahora, el Telescopio Mayall utilizado por DESI tiene un espejo (primario) de 3,8 m de diámetro, por lo que un área de ###Code # Area of the circular DESI primary mirror. Area = np.pi * (3.8 / 2.)**2. 
# [m] to [cm]. Area *= 1.e4 Area # [cm^2] ###Output _____no_output_____ ###Markdown con este espejo suavemente curvado para enfocar la luz a un punto en el [foco](https://en.wikipedia.org/wiki/Cassegrain_reflector), con una distancia focal de 10,7 m. Cuando DESI apunta al cielo, toma una instantánea de la luz captada por 5000 fibras individuales a la vez. Puedes ver 500 en un "pétalo" en forma de cuña debajo Cada fibra tiene un diámetro $w=107 \mu m$ o $10^{-4}m$ y 10 de los pétalos anteriores pueblan el plano focal DESI. Con la distancia focal de $f_{\rm{M1}} = 10.7$m, cada fibra recibe luz de un parche circular en el cielo de $\theta \simeq (w/2) \ / \ f_{\rm{M1}}$. ###Code # Angular radius of fiber, rather than diameter. theta = 107e-6 / 2 / 10.7 # [radians] theta *= 180. / np.pi # [degrees] theta *= 60. * 60. # [arcseconds] theta # [arcseconds] ###Output _____no_output_____ ###Markdown En realidad, la 'escala de la placa' varía de tal manera que una mejor aproximación es 1,5 segundos de arco. ###Code theta = 1.5 # [arcseconds] ###Output _____no_output_____ ###Markdown Cada fibra tiene un pequeño motor que puede viajar para observar cualquier galaxia dentro de cada círculo que se muestra: (Puedes ver más ejemplos con el [visor](https://www.legacysurvey.org/viewerIC%201229) ). La luz recibida por cada fibra se redirige a lo largo de una fibra óptica para finalmente aterrizar en un solo píxel de un CCD, donde cada fotón se convierte en un electrón por el [efecto fotoeléctrico](https://en.wikipedia.org/wiki/Photoelectric_effect): ¡uno de los primeros descubrimientos en Mecánica Cuántica de Einstein!Nuestro primo cercano, el Dark Energy Survey, observa en un gemelo idéntico al Mayall en Chile y tiene algunos de los [CCD] más bonitos (https://www.darkenergysurvey.org/the-des-project/instrument/the-camera/) alrededor (cada rectángulo). En total, se muestran sesenta y dos CCDS, con 2048 x 4096 píxeles cada uno, ¡para un total de 520 millones de píxeles! 
En comparación, los últimos iPhones tienen [12 millones de píxeles](https://www.iphonefaq.org/archives/976253). Ahora, el número de galaxias que necesitamos (17 millones de ELG) define la luminosidad de la línea (brillo de la cantidad) de [OII] que necesitamos alcanzar, ese es nuestro objetivo. ###Code line_flux = 8.e-17 # [ergs/s/cm2]. ###Output _____no_output_____ ###Markdown Hablemos de unidades. Un ergio es $10^{-7}$ Joules, por lo que es una cantidad muy pequeña de energía, en Joules, que llega por segundo, en un cm2. ###Code def model(wave, sigma, z, r=0.7): # Unit amplitude, sigma is the width of the line, z is the redshift and r is the relative amplitudes of the lines in the doublet. return 1. / (1. + r) / np.sqrt(2. * np.pi) / sigma * (r * np.exp(- ((wave - lambdaa * (1. + z)) / np.sqrt(2.) / sigma)**2.) + np.exp(- ((wave - lambdab * (1. + z)) / np.sqrt(2.) / sigma)**2.)) width = dlamba_inst(R, z, lambdaa) profile = model(wave, width, z) # [1/Angstrom]. profile *= line_flux # [ergs/s/cm2/Angstrom]. profile *= dlambda # [ergs/s/cm2/pixel]. pl.clf() pl.plot(wave, profile) pl.xlabel('Wavelength [Angstroms]') pl.ylabel('Flux [ergs/s/cm2/pixel]') pl.xlim((1. + z) * 3720., (1. + z) * 3740.) # Summing over pixels, gives us the total line flux again: np.sum(profile) # [ergs/s/cm2]. ###Output _____no_output_____ ###Markdown Mientras que la energía de cada OII [fotón](https://en.wikipedia.org/wiki/Photon) que recibimos se puede encontrar en $E=h \nu$, donde $h=6.626 \times 10^{-34} J \cdot s$ y una frecuencia encontrada por $c = \nu \cdot \lambda$. ###Code c = 2.9979e8 * 1.e10 # [Angstrom/s]. nus = c / wave # [Hertz] = [s^{-1}]. Energys = 6.626e-34 * nus # [Joules] Energys *= 1.e7 # [ergs] ###Output _____no_output_____ ###Markdown Entonces, la galaxia emisora ​​de OII más débil que podríamos observar daría como resultado que cada píxel DESI (en longitud de onda, 15 $\mu m$ en tamaño físico) reciba una cantidad de fotones por segundo dada por ###Code # ergs per ... 
to photons per ... profile /= Energys # [photons/s/cm2/pixel]. # Photons recieved by a DESI pixel per second (assuming no fiber loss). profile *= Area # [photons/s/pixel/M1]. # Total number of photons recieved by DESI from the source. np.sum(profile) # [photons/s/M1] ###Output _____no_output_____ ###Markdown Ahora, la eficiencia cuántica de un CCD no es del 100%, por lo que cada fotón no produce un electrón. Más bien, se producen de 60 electrones a 100 fotones (una eficiencia del 60%). ###Code QE = 0.6 profile *= QE # [electrons/s/pixel/M1]. ###Output _____no_output_____ ###Markdown Para contrarrestar esta ineficiencia, tomamos una exposición que dura 15 minutos durante los cuales los electrones se acumulan en los píxeles del CCD. ###Code exptime = 15. * 60. # [seconds] profile *= exptime # [electrons/exposure/pixel/M1] pl.plot(wave, profile) pl.xlim((1. + z) * 3720., (1. + z) * 3740.) pl.xlabel('Wavelength [Angstroms]') pl.ylabel('Flux [electrons/exposure/M1/pixel]') ###Output _____no_output_____ ###Markdown Pero hay otro pequeño problema. A medida que la luz de la galaxia viaja a través de la atmósfera, se agita de tal manera que aparece manchada en el cielo. El tamaño aparente de una estrella con forma de punto (en realidad) se debe a esto se conoce como "ver", en segundos de arco. El golpe puede ser tan fuerte, dependiendo del clima, que la luz de las estrellas se puede perder en la fibra incluso si está centrada correctamente. Veamos esto. ###Code def moffatt(r, fwhm, beta=3.5): ## Apparent radial profile of star-light due to buffeting by the atmosphere. ## Sec. 4 of https://iopscience.iop.org/article/10.1086/675808/pdf; [arcsecond]. alpha = fwhm / 2. / (2.**(1./beta) - 1.)**0.5 return (2. * (beta - 1.) / alpha / alpha) * (1. + (r/alpha)**2.)**-beta fwhm = 2.0 dr = 0.01 rs = np.arange(0.0, 15., dr) ## [arcseconds]. 
ms = moffatt(rs, fwhm) pl.axvline(theta, alpha=0.25, c='k') pl.plot(rs, ms, c='k') pl.xlabel('Distance from center of star [arcseconds]') pl.ylabel('Relative apparent brightness of star') pl.xlim(left=-0.1, right=6.0) # A range of full-width @ half max. values for the seeing. fwhms = np.arange(0.5, 3.5, 0.1) # Find the index in our distance grid closest to the size of a fiber. indx = np.abs(rs - theta).argmin() # A list to collect the fraction of light down a fiber for each value of the seeing. fiberfracs = [] # Loop over the seeing values. for i, fwhm in enumerate(fwhms): # Work out the radial profile of the star. ms = moffatt(rs, fwhm) # Integrate this to get the total light within a radius Is = 2. * np.pi * dr * np.cumsum(rs * ms) # Calculate the fiber fraction for each r value we as for. ffrac = Is / Is[-1] # Save the fiber fraction for the radius corresponding to the fiber size. fiberfracs.append(ffrac[indx]) fiberfracs = np.array(fiberfracs) pl.plot(fwhms, fiberfracs) pl.xlim(0.5, 3.0) pl.xlabel(r'$(FWHM) \ Seeing \ [{\rm arcseconds}]$') pl.ylabel(r'FIBER FRAC.') ###Output _____no_output_____ ###Markdown Entonces, a medida que el aire (altamente) [turbulento](https://en.wikipedia.org/wiki/Turbulence) se mueve en la atmósfera, la luz de la galaxia se difumina al tamaño del "ver". Cuando esto empeora, $\simeq 3.0$ segundos de arco, ¡el 60% de la luz se puede perder! DESI necesita algo como un segundo de arco para observar, de lo contrario, simplemente tiramos los datos. Pero finalmente, esto significa que podemos esperar que el 80% de la luz se capture en una exposición normal: ###Code fiberfrac = 0.8 profile *= fiberfrac # [electrons/exposure/pixel/M1] ###Output _____no_output_____ ###Markdown Ahora, dependiendo de las fases de la luna, cada fibra colocada en una galaxia también recibe una cantidad de "fondo" de luz (lunar) que se origina a partir de la luz _dispersada_ por la atmósfera. 
This background depends strongly on the phases of the moon; for the ELGs, we must avoid observing close to full moon. As a side note, with an apparent angular diameter of $0.5$ degrees, the moon would fit $\approx 6 \times$ side-by-side across the DESI field of view (3.2 degrees in diameter). A typical level for the background light is 3.4e-18 erg / cm$^2/s/$Angstrom / sq. arcsecond, with a projected fiber area given by
###Code
fib_area = np.pi * theta**2.  # [sq. arcsecond]
fib_area
###Output
_____no_output_____
###Markdown
The corresponding _background_ level of photons received by a DESI pixel per second (as before):
###Code
background  = 3.4e-18  # [erg/s/cm 2/ Angstrom/sq. arcsecond].
background *= fib_area

background  # [erg/s/cm 2/ Angstrom].
###Output
_____no_output_____
###Markdown
which we convert in the same way as before:
###Code
background /= Energys  # [photons/s/cm2/Angstrom].
background *= dlambda  # [photons/s/cm2/pixel].

# Background photons received by a DESI pixel per second (assuming no fiber loss).
background *= Area  # [photons/s/pixel/M1].

# Quantum efficiency
background *= QE  # [electrons/s/pixel/M1].

background *= exptime  # [electrons/exposure/pixel/M1].

background
###Output
_____no_output_____
###Markdown
The background noise is Poisson: on average we expect the background level of electrons, but for any given exposure there will be fluctuations according to a known [distribution](https://en.wikipedia.org/wiki/Poisson_distribution). Assuming the number of measured electrons is background dominated, the variance we expect in the number of measured electrons is that of a Poisson distribution:
###Code
pixel_variance = background  # [electrons/exposure/pixel/M1].

noise = []

for p in background:
    noise.append(np.random.poisson(p, 1)[0])

noise = np.array(noise)

noise

data = profile + noise

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, noise)
pl.plot(wave, data)

pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')
###Output
_____no_output_____
###Markdown
DESI has dedicated fibers that point at the sky, rather than at galaxies. This allows the sky background to be measured, so that the mean level can be subtracted:
###Code
data -= background

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, data)

pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')
###Output
_____no_output_____
###Markdown
We need to establish whether this is good enough! This will be a fitting exercise, as in the Introduction. We will define a best-fit metric: $\chi^2 = \sum_p \left ( \frac{D_p - A \cdot M_p}{\sigma_p} \right )^2$ which computes the accumulated (error-weighted) squared distance of the data from the model. Here $A$ represents the line flux, $M$ is the model we defined above, and $\sigma_p$ is the (background-dominated) standard deviation of the electrons in each pixel. If we differentiate this with respect to $A$, we find the best-fitting line flux (remember, the truth was defined above): $A = \left (\sum_p D_p M_p / \sigma_p^2 \right ) / \left (\sum_p M_p^2 / \sigma_p^2 \right )$, or
###Code
# Estimated line flux
Mp  = model(wave, width, z) * dlambda  # [ergs/s/cm2/pixel]
Mp /= Energys    # [photons/s/cm2/pixel].
Mp *= Area       # [photons/s/pixel/M1].
Mp *= QE         # [electrons/s/pixel/M1].
Mp *= exptime    # [electrons/exposure/pixel/M1].
Mp *= fiberfrac  # [electrons/exposure/pixel/M1].

pl.plot(wave, data)
pl.plot(wave, Mp * line_flux)

pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

est_line_flux = np.sum(data * Mp / pixel_variance) / np.sum(Mp**2. / pixel_variance)

est_line_flux
###Output
_____no_output_____
###Markdown
Incredible!
We have been able to measure the line flux of our emission-line galaxy. Now, what is the error on our measurement? You can get this from the second derivative of $\chi^2$, $\sigma_A^{-2} = \left ( \frac{1}{2} \right ) \frac{\partial^2 \chi^2}{\partial^2 A} = \sum_p \frac{M_p^2}{\sigma_p^2}$.
###Code
varA = np.sum(Mp**2 / pixel_variance)
sigA = 1. / np.sqrt(varA)

sigA
###Output
_____no_output_____
###Markdown
Giving a signal-to-noise ratio (how many times larger the 'signal' is than the noise) of $SNR = A / \sigma_A$.
###Code
SNR = est_line_flux / sigA

print('At a limiting OII line flux of {:.3e}, with a resolution {:.3f}, the SNR is {:.3f}!'.format(line_flux, R, SNR))
###Output
At a limiting OII line flux of 8.000e-17, with a resolution 9000.000, the SNR is 25.651!
###Markdown
OII and more galaxies Whether by observing ever more distant galaxies, or by collecting those we have previously missed, our science always improves with more galaxies! The problem is that we have already analysed all the easy, bright galaxies, and things get harder as we are forced to observe the faintest galaxies we know of. We have to be smart about how we do this, and sometimes a surprising opportunity presents itself ... _Warning: this notebook steps up the difficulty, to let us design more fun experiments based on what you will learn here. If you have any problems, [ask](https://github.com/michaelJwilson/DESI-HighSchool/issues/new/choose). Stick with us!_ Tired of listening to your parents? Atoms feel the same way. Their life is a series of rules, rules, rules. Do this, don't do that; the [list](https://es.wikipedia.org/wiki/Transición_electrónica) is long. But sometimes they get tired and rebel, ![title](images/Climate.png) It turns out that a rebellion, every now and then, can be a good thing.
For example, (doubly) ionized oxygen, or [OII], (incredibly rarely) emits a unique doublet that it [otherwise](https://es.wikipedia.org/wiki/L%C3%ADnea_prohibida) would not. Let's see what happens.
###Code
# Wavelengths of the OII doublet.
lambdaa = 3727.092  # Angstroms
lambdab = 3729.875  # Angstroms

# Weighted average.
OII = 3728.483

# Width of each line due to thermal broadening.
def width(center, dv):
    # velocity difference [speed of light]
    return center * dv

wave = np.arange(1000, 1.e4, 0.05)

dlambdaa = width(lambdaa, 1.e-4)
dlambdab = width(lambdab, 1.e-4)

def gaussian(wave, center, width):
    # https://es.wikipedia.org/wiki/Función_gaussiana
    norm = np.sqrt(2. * np.pi / width)
    return np.exp(-0.5 * (wave - center)**2. / width**2.)

ax = pl.gca()

ax.fill_between(wave, 0., gaussian(wave, lambdaa, dlambdaa), color='b', alpha=1.0)
ax.fill_between(wave, 0., gaussian(wave, lambdab, dlambdab), color='b', alpha=1.0)
ax.fill_between(wave, 0., gaussian(wave, 3889.0, width(3889.0, 1.e-4)), color='k', alpha=1.)

pl.xlim(3700., 3900.)
pl.ylim(0.25, 1.1)
pl.xlabel('Wavelength [AA]')
pl.ylabel('Normalized flux')
###Output
_____no_output_____
###Markdown
First, the _forbidden_ [OII] transitions (blue) form a doublet of two closely spaced lines. These have a finite width since the emitting stars are moving (at 0.01% of the speed of light in this example), which leads to the usual Doppler broadening. Contrast this with the black He I line, which is a single line or "singlet". The problem is that a single line emitted by a galaxy at a given redshift can look like a different line at another redshift. Your turn: if there were a Lyman-$\alpha$ emitter at $z=4.0$, could you tell the difference from an H-$\alpha$ emitter (6564.61 Angstroms) at a different redshift? What redshift would this second galaxy have? Remember, the observed wavelength is $(1 + z) \ \times$ the rest-frame wavelength, and Lyman-$\alpha$ is the 2-1 transition of hydrogen that we saw in the introduction. So [OII] is unique in the sense that, as a doublet, we are more likely to be able to distinguish it from singlets at different redshifts. The second great thing is that it is the second strongest line emitted by young stars (the first is H-$\alpha$), as in the Orion nebulae, an iconic image of star formation: High-redshift galaxies have younger, more actively forming stars, so they emit a lot of [OII]. So, as we look farther away, we are more likely to see OII emitters. Since these galaxies are so far away, it would be very hard to detect something so faint were it not for this OII emission:
###Code
zs = np.arange(0.01, 1.7, 0.01)
lumdists = Planck15.luminosity_distance(zs)

faints = (lumdists / lumdists[0])**2.

pl.xlabel(r'$z$')
pl.ylabel('Faintness relative to a galaxy in your lap')
pl.semilogy(zs, faints)
###Output
_____no_output_____
###Markdown
At $z=0.25$, a galaxy is 1000 times fainter than it would be in your lap. By $z=1.75$, the most distant ELG detected by DESI, it is 10,000 times fainter (how much fainter depends on whether there is Dark Energy in the Universe; here we assume the ~70% we learned about in the Introduction). [Astropy](https://docs.astropy.org/en/stable/index.html) makes this really easy to work out, but it would be much better to understand how to get there. To get a feel for it, try [here](https://in-the-sky.org/article.php?term=cosmological_distance). So, we want emission-line galaxies (ELGs) with an OII doublet. We had better make sure that our telescope, and our instrument for dispersing the light, are capable of detecting and "resolving" this faint signal.
Fundamentally, our instrument must be designed to ensure that the doublet is not blurred, since this would turn the doublet into a singlet and lead to exactly the confusion we would like to avoid. The question is, how should we do this? Would a simple laboratory [prism](https://es.wikipedia.org/wiki/Prisma_(óptica)) be enough? The answer is no: the prism would have to be too large, and would lose too much light, to achieve the required dispersion (separation between colors). We need something more advanced, a grating, which can disperse light via diffraction (or reflection) and the interference caused by a series of slits etched into metal (with diamond). See [here](https://es.wikipedia.org/wiki/Red_de_difracción) for more details. In fact, DESI uses a special grating that changes the [refractive index](https://es.wikipedia.org/wiki/Índice_de_refracción) of the glass, thousands of times per millimeter, to achieve the same [effect](https://arxiv.org/pdf/1611.00037.pdf): Etching these lines is expensive, so we should minimize the number we need. You wouldn't waste money out of your own pocket, would you? So, what resolution do we _need_ to do science with (OII) emission-line galaxies? And what does that mean for the instrument we need to build? The resolution $R$ is defined as $(\lambda / \Delta \lambda)$, where $\Delta \lambda$ is the effective width of a (Gaussian) line. So, as the instrumental resolution decreases, our observed lines broaden:
###Code
def dlamba_inst(R, z, center):
    # equation (2) of https://arxiv.org/pdf/1310.0615.pdf
    return (1. + z) * center / R  # [Angstroms]

fig, ax = plt.subplots(1, 1, figsize=(10,10))

for R in [1000., 2000., 3000., 4000., 5.e4]:
    ax.plot(wave, gaussian(wave, lambdaa, dlamba_inst(R, 0.25, lambdaa)), label='R={:.0f}'.format(R))

ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal')

ax.set_xlabel('Wavelength [Angstroms]')
ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]')
ax.legend(frameon=False, loc=1)
ax.set_xlim(3710., 3750.)
###Output
_____no_output_____
###Markdown
So, would a resolution of $R=50,000$ make sense for DESI? No, since the line would be broader anyway, simply due to the thermal velocity of the emitting gas in the galaxy. Let's look at this. If we include the broadening due to both the velocity dispersion of the emitting gas and the instrument, the width saturates no matter the instrumental resolution:
###Code
def dlamba_tot(R, z, center, v=1.e-4):
    # Widths of Gaussians add in quadrature; (https://en.wikipedia.org/wiki/Propagation_of_uncertainty).
    return np.sqrt(dlamba_inst(R, z, center)**2. + width(center, v)**2.)

fig, ax = plt.subplots(1, 1, figsize=(10,10))

ax.plot(wave, gaussian(wave, lambdaa, dlambdaa), color='k', alpha=0.5, label='Thermal')

for R in [1000., 2000., 3000., 4000., 5.e4]:
    ax.plot(wave, gaussian(wave, lambdaa, dlamba_tot(R, 0.25, lambdaa)), label='R={:.0f}'.format(R))

ax.set_xlabel('Wavelength [Angstroms]')
ax.set_ylabel('Flux$_{\lambda}$ [erg/s/cm$^2$/Angstrom]')
ax.legend(frameon=False, loc=1)
ax.set_xlim(3710., 3750.)
###Output
_____no_output_____
###Markdown
So you can see that with an insufficient instrument, [OII] will be blurred and totally useless to us! But we need to know what is good enough. Let's try.
The resolution $R$ defines the resolution element as $R = (\lambda / \Delta \lambda)$, as above, for a galaxy at redshift $z$, e.g.:
###Code
R = 9.e3
z = 1.00
###Output
_____no_output_____
###Markdown
giving the width of a resolution element as
###Code
dlambda = OII * (1 + z) / R  # [Angstroms].
###Output
_____no_output_____
###Markdown
A famous [theorem](https://es.wikipedia.org/wiki/Teorema_de_muestreo_de_Nyquist-Shannon) - which, incidentally, is an entry point to [Information Theory](https://es.wikipedia.org/wiki/Teor%C3%ADa_de_la_información) and the digital world - tells us that we need to sample a resolution element at least _twice_ to accurately reconstruct the (band-limited) function being sampled. To be safe, we will sample it three times, giving a pixel width of 1/3 of the width of the resolution element:
###Code
# Width of a pixel in Angstroms, rather than of the resolution element.
dlambda /= 3.

# Match the wavelength grid to the pixel grid.
wave = np.arange(3600, 1.e4, dlambda)
###Output
_____no_output_____
###Markdown
Now, the Mayall Telescope used by DESI has a (primary) mirror 3.8 m in diameter, so it has an area of
###Code
# Area of DESI's circular primary mirror.
Area = np.pi * (3.8 / 2.)**2.

# [m] to [cm].
Area *= 1.e4

Area  # [cm^2]
###Output
_____no_output_____
###Markdown
with this mirror gently curved to focus the light to a point at the [focus](https://en.wikipedia.org/wiki/Cassegrain_reflector), with a focal length of 10.7 m. When DESI points at the sky, it instantly collects light with 5000 individual fibers at once. You can see 500 of them in one wedge-shaped "petal" below. Each fiber has a diameter of $w=107 \mu m$, or $10^{-4}m$, and 10 of these petals populate the DESI focal plane. With the focal length of $f_{\rm{M1}} = 10.7$m, each fiber receives light from a circular patch of sky of $\theta \simeq (w/2) \ / \ f_{\rm{M1}}$.
###Code
# Angular radius of the fiber, rather than the diameter.
theta = 107e-6 / 2 / 10.7  # [radians]
theta *= 180. / np.pi      # [degrees]
theta *= 60. * 60.         # [arcseconds]

theta  # [arcseconds]
###Output
_____no_output_____
###Markdown
In reality, the 'plate scale' varies such that a better approximation is 1.5 arcseconds.
###Code
theta = 1.5  # [arcseconds]
###Output
_____no_output_____
###Markdown
Each fiber has a small motor that can travel to observe any galaxy within each circle shown: (You can see more examples with the [viewer](https://www.legacysurvey.org/viewerIC%201229)). The light received by each fiber is redirected along an optical fiber to eventually land on a single pixel of a CCD, where each photon is converted into an electron by the [photoelectric effect](https://es.wikipedia.org/wiki/Efecto_fotoeléctrico): one of the first discoveries in Quantum Mechanics, made by Einstein! Our close cousin, the Dark Energy Survey, observes on an identical twin of the Mayall in Chile and has some of the nicest [CCDs](https://www.darkenergysurvey.org/the-des-project/instrument/the-camera/) around (each rectangle). In total, sixty-two CCDs are shown, with 2048 x 4096 pixels each, for a total of 520 million pixels! By comparison, the latest iPhones have [12 million pixels](https://www.iphonefaq.org/archives/976253). Now, the number of galaxies we need (17 million ELGs) defines the [OII] line luminosity (how bright) we need to reach; that is our target.
###Code
line_flux = 8.e-17  # [ergs/s/cm2].
###Output
_____no_output_____
###Markdown
Let's talk units.
An erg is $10^{-7}$ Joules, so this is a very small amount of energy, in Joules, arriving per second, per cm2.
###Code
def model(wave, sigma, z, r=0.7):
    # Unit amplitude; sigma is the line width, z is the redshift and r is the relative amplitude of the lines in the doublet.
    return 1. / (1. + r) / np.sqrt(2. * np.pi) / sigma * (r * np.exp(- ((wave - lambdaa * (1. + z)) / np.sqrt(2.) / sigma)**2.) + np.exp(- ((wave - lambdab * (1. + z)) / np.sqrt(2.) / sigma)**2.))

width = dlamba_inst(R, z, lambdaa)

profile = model(wave, width, z)  # [1/Angstrom].
profile *= line_flux            # [ergs/s/cm2/Angstrom].
profile *= dlambda              # [ergs/s/cm2/pixel].

pl.clf()
pl.plot(wave, profile)

pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [ergs/s/cm2/pixel]')

pl.xlim((1. + z) * 3720., (1. + z) * 3740.)

# Summing over pixels gives the total flux in the line again.
np.sum(profile)  # [ergs/s/cm2].
###Output
_____no_output_____
###Markdown
The energy of each OII [photon](https://es.wikipedia.org/wiki/Fotón) we receive can be calculated using $E=h \nu$, where $h=6.626 \times 10^{-34} J \cdot s$ and the frequency is given by $c = \nu \cdot \lambda$.
###Code
c = 2.9979e8 * 1.e10  # [Angstrom/s].

nus = c / wave  # [Hertz] = [s^{-1}].

Energys = 6.626e-34 * nus  # [Joules]
Energys *= 1.e7            # [ergs]
###Output
_____no_output_____
###Markdown
So, the faintest OII-emitting galaxy we could observe would result in each DESI pixel (in wavelength; 15 $\mu m$ in physical size) receiving a number of photons per second given by
###Code
# ergs per ... to photons per ...
profile /= Energys  # [photons/s/cm2/pixel].

# Photons received by a DESI pixel per second (assuming no fiber loss).
profile *= Area  # [photons/s/pixel/M1].

# Total number of photons received by DESI from the source.
np.sum(profile)  # [photons/s/M1]
###Output
_____no_output_____
###Markdown
Now, the quantum efficiency of a CCD is not 100%, so not every photon produces an electron. Rather, roughly 60 electrons are produced for every 100 photons (a 60% efficiency).
###Code
QE = 0.6

profile *= QE  # [electrons/s/pixel/M1].
###Output
_____no_output_____
###Markdown
To counteract this inefficiency, we take an exposure lasting 15 minutes, during which the electrons accumulate in the CCD pixels.
###Code
exptime = 15. * 60.  # [seconds]

profile *= exptime  # [electrons/exposure/pixel/M1]

pl.plot(wave, profile)
pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')
###Output
_____no_output_____
###Markdown
But there is another small problem. As the galaxy's light travels through the atmosphere, it gets buffeted in such a way that it appears smeared on the sky. The apparent size (in arcseconds) of a star that should really look like a point is set by what is known as the ["seeing"](https://es.wikipedia.org/wiki/Seeing). Depending on the weather, the buffeting can be so strong that starlight is lost from the fiber even when it is correctly centered. Let's look at this.
###Code
def moffatt(r, fwhm, beta=3.5):
    ## Apparent radial profile of star-light due to buffeting by the atmosphere.
    ## Sec. 4 of https://iopscience.iop.org/article/10.1086/675808/pdf; [arcsecond].
    alpha = fwhm / 2. / (2.**(1./beta) - 1.)**0.5
    return (2. * (beta - 1.) / alpha / alpha) * (1. + (r/alpha)**2.)**-beta

fwhm = 2.0
dr = 0.01
rs = np.arange(0.0, 15., dr)  ## [arcseconds].
ms = moffatt(rs, fwhm)

pl.axvline(theta, alpha=0.25, c='k')
pl.plot(rs, ms, c='k')
pl.xlabel('Distance from center of star [arcseconds]')
pl.ylabel('Relative apparent brightness of star')
pl.xlim(left=-0.1, right=6.0)

# A range of full-width @ half max. values for the seeing.
fwhms = np.arange(0.5, 3.5, 0.1)

# Find the index in our distance grid closest to the size of a fiber.
indx = np.abs(rs - theta).argmin()

# A list to collect the fraction of light down a fiber for each value of the seeing.
fiberfracs = []

# Loop over the seeing values.
for i, fwhm in enumerate(fwhms):
    # Work out the radial profile of the star.
    ms = moffatt(rs, fwhm)

    # Integrate this to get the total light within a radius
    Is = 2. * np.pi * dr * np.cumsum(rs * ms)

    # Calculate the fiber fraction for each r value we ask for.
    ffrac = Is / Is[-1]

    # Save the fiber fraction for the radius corresponding to the fiber size.
    fiberfracs.append(ffrac[indx])

fiberfracs = np.array(fiberfracs)

pl.plot(fwhms, fiberfracs)
pl.xlim(0.5, 3.0)
pl.xlabel(r'$(FWHM) \ Seeing \ [{\rm arcseconds}]$')
pl.ylabel(r'FIBER FRAC.')
###Output
_____no_output_____
###Markdown
So, as (highly) [turbulent](https://es.wikipedia.org/wiki/Turbulencia) air moves through the atmosphere, the galaxy's light is smeared out depending on the seeing. When the seeing is bad, $\simeq 3.0$ arcseconds, 60% of the light can be lost! DESI needs something like one-arcsecond seeing to observe; otherwise we simply throw away the data. Ultimately, this means we can expect 80% of the light to be captured in a typical exposure:
###Code
fiberfrac = 0.8

profile *= fiberfrac  # [electrons/exposure/pixel/M1]
###Output
_____no_output_____
###Markdown
Now, depending on the phases of the moon, each fiber placed on a galaxy also receives an amount of "background" (moon)light, originating from light _scattered_ by the atmosphere. This background depends strongly on the phases of the moon; for the ELGs, we must avoid observing close to full moon. As a side note, with an apparent angular diameter of $0.5$ degrees, the moon would fit $\approx 6 \times$ side-by-side across the DESI field of view (3.2 degrees in diameter). A typical level for the background light is 3.4e-18 erg / cm$^2/s/$Angstrom / sq. arcsecond, with a projected fiber area given by
###Code
fib_area = np.pi * theta**2.  # [sq. arcsecond]
fib_area
###Output
_____no_output_____
###Markdown
The corresponding _background_ level of photons received by a DESI pixel per second (as before):
###Code
background  = 3.4e-18  # [erg/s/cm 2/ Angstrom/sq. arcsecond].
background *= fib_area

background  # [erg/s/cm 2/ Angstrom].
###Output
_____no_output_____
###Markdown
which we convert in the same way as before:
###Code
background /= Energys  # [photons/s/cm2/Angstrom].
background *= dlambda  # [photons/s/cm2/pixel].

# Background photons received by a DESI pixel per second (assuming no fiber loss).
background *= Area  # [photons/s/pixel/M1].

# Quantum efficiency
background *= QE  # [electrons/s/pixel/M1].

background *= exptime  # [electrons/exposure/pixel/M1].

background
###Output
_____no_output_____
###Markdown
The background noise is Poisson: on average we expect the background level of electrons, but for any given exposure there will be fluctuations according to a known [distribution](https://en.wikipedia.org/wiki/Poisson_distribution).
Assuming the number of measured electrons is background dominated, the variance we expect in the number of measured electrons is that of a Poisson distribution:
###Code
pixel_variance = background  # [electrons/exposure/pixel/M1].

noise = []

for p in background:
    noise.append(np.random.poisson(p, 1)[0])

noise = np.array(noise)

noise

data = profile + noise

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, noise)
pl.plot(wave, data)

pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')
###Output
_____no_output_____
###Markdown
DESI has dedicated fibers that point at the sky, rather than at galaxies. This allows the sky background to be measured, so that the mean level can be subtracted:
###Code
data -= background

pl.plot(wave, profile, alpha=0.5)
pl.plot(wave, background, alpha=0.5)
pl.plot(wave, data)

pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')
###Output
_____no_output_____
###Markdown
We need to establish whether this is good enough! This will be a fitting exercise, as in the Introduction. We will define a best-fit metric: $$\chi^2 = \sum_p \left ( \frac{D_p - A \cdot M_p}{\sigma_p} \right )^2$$ which computes the accumulated (error-weighted) squared distance of the data from the model. Here $A$ represents the line flux, $M$ is the model we defined above, and $\sigma_p$ is the (background-dominated) standard deviation of the electrons in each pixel. If we differentiate this with respect to $A$, we find the best-fitting line flux (remember, the truth was defined above): $A = \left (\sum_p D_p M_p / \sigma_p^2 \right ) / \left (\sum_p M_p^2 / \sigma_p^2 \right )$, or
###Code
# Estimated line flux
Mp  = model(wave, width, z) * dlambda  # [ergs/s/cm2/pixel]
Mp /= Energys    # [photons/s/cm2/pixel].
Mp *= Area       # [photons/s/pixel/M1].
Mp *= QE         # [electrons/s/pixel/M1].
Mp *= exptime    # [electrons/exposure/pixel/M1].
Mp *= fiberfrac  # [electrons/exposure/pixel/M1].

pl.plot(wave, data)
pl.plot(wave, Mp * line_flux)

pl.xlim((1. + z) * 3720., (1. + z) * 3740.)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Flux [electrons/exposure/M1/pixel]')

est_line_flux = np.sum(data * Mp / pixel_variance) / np.sum(Mp**2. / pixel_variance)

est_line_flux
###Output
_____no_output_____
###Markdown
Incredible! We have been able to measure the line flux of our emission-line galaxy. Now, what is the error on our measurement? You can get this from the second derivative of $\chi^2$, $\sigma_A^{-2} = \left ( \frac{1}{2} \right ) \frac{\partial^2 \chi^2}{\partial^2 A} = \sum_p \frac{M_p^2}{\sigma_p^2}$.
###Code
varA = np.sum(Mp**2 / pixel_variance)
sigA = 1. / np.sqrt(varA)

sigA
###Output
_____no_output_____
###Markdown
Giving a signal-to-noise ratio (how many times larger the 'signal' is than the noise) of $SNR = A / \sigma_A$.
###Code
SNR = est_line_flux / sigA

print('At a limiting OII line flux of {:.3e}, with a resolution {:.3f}, the SNR is {:.3f}!'.format(line_flux, R, SNR))
###Output
_____no_output_____
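###Markdown
The matched-filter estimator derived above can be checked end-to-end on synthetic data. A minimal sketch, standalone of the notebook's variables and with purely illustrative numbers (pixel grid, line width, amplitude and sky level are all hypothetical):
###Code
```python
import numpy as np

rng = np.random.default_rng(0)

# Pixel grid and a Gaussian line model M_p of unit amplitude (hypothetical numbers).
wave = np.arange(7440., 7460., 0.25)                # [Angstrom]
center, sigma = 7450., 0.8                          # line center and width [Angstrom]
Mp = np.exp(-0.5 * ((wave - center) / sigma)**2.)   # [electrons per unit amplitude]

A_true = 50.       # true line amplitude [electrons]
background = 400.  # flat sky level [electrons/pixel]

# Data: line + Poisson sky, with the mean sky subtracted (as with DESI sky fibers).
data = A_true * Mp + rng.poisson(background, wave.size) - background

# Background-dominated Poisson variance per pixel.
var = background * np.ones_like(wave)

# Best-fit amplitude and its error, from the chi^2 derivatives in the text.
A_hat = np.sum(data * Mp / var) / np.sum(Mp**2. / var)
sig_A = 1. / np.sqrt(np.sum(Mp**2. / var))
snr = A_hat / sig_A
```
The recovered `A_hat` scatters around `A_true` by `sig_A`, which is exactly the behaviour the $\chi^2$ algebra above predicts.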
notebooks/02-solr/04-solr-rerank.ipynb
###Markdown Reranking with Solr LTR Model ###Code import json import os import random import requests import urllib.parse SOLR_URL = "http://localhost:8983/solr/tmdbindex/" QUERY_LIST = [ "murder", "musical", "biography", "police", "world war ii", "comedy", "superhero", "nazis", "romance", "martial arts", "extramarital", "spy", "vampire", "magic", "wedding", "sport", "prison", "teacher", "alien", "dystopia" ] TOP_N = 10 def rating2label(rating): """ convert 0-10 continuous rating to 1-5 categorical labels """ return int(rating // 2) + 1 def get_rating_string(rating): rating_string = [] for i in range(rating): rating_string.append(u"\u2605") for i in range(5 - rating): rating_string.append(u"\u2606") return "".join(rating_string) print(get_rating_string(3)) print(get_rating_string(rating2label(6.4))) # random.randint is inclusive on both ends, so subtract 1 to avoid an IndexError. query = QUERY_LIST[random.randint(0, len(QUERY_LIST) - 1)] if len(query.split()) > 1: query = "\"" + query + "\"" ###Output _____no_output_____ ###Markdown Top 20 results without LTR ###Code def render_results(docs, query, top_n): print("top {:d} results for {:s}".format(top_n, query)) print("---") for doc in docs: doc_id = int(doc["id"]) stars = get_rating_string(rating2label(float(doc["rating_f"]))) score = float(doc["score"]) title = doc["title_t"] print("{:s} {:06d} {:.3f} {:s}".format(stars, doc_id, score, title)) payload = { "q": query, "defType": "edismax", "qf": "title_t description_t", "pf": "title_t description_t", "mm": 2, "fl": "id,title_t,rating_f,score", "rows": TOP_N * 2 } params = urllib.parse.urlencode(payload, quote_via=urllib.parse.quote_plus) search_url = SOLR_URL + "select?"
+ params resp = requests.get(search_url) resp_json = json.loads(resp.text) docs = resp_json["response"]["docs"] render_results(docs, query, TOP_N * 2) ###Output top 20 results for "world war ii" --- ★★★★☆ 039485 14.994 Hotel Sahara ★★★★☆ 143335 14.659 The Gathering Storm ★★★☆☆ 166610 14.659 The Ducktators ★★★★☆ 030298 14.497 The Secret of Santa Vittoria ★★★★☆ 043313 14.339 The Teahouse of the August Moon ★★★☆☆ 035954 14.339 Cornered ★★★☆☆ 074474 14.339 Varian's War ★★★☆☆ 165300 14.184 Hotel Berlin ★★★★☆ 029032 14.184 The Secret Invasion ★★★☆☆ 034945 14.184 The Conspirators ★★★★☆ 004820 14.032 Never So Few ★★★☆☆ 343070 14.004 Flight World War II ★★★★☆ 027367 13.883 Mrs. Miniver ★★★★☆ 022905 13.875 The Rape of Europa ★★★★☆ 011589 13.738 Kelly's Heroes ★★★★☆ 051044 13.738 Carmen Jones ★★★★☆ 044480 13.738 Education for Death ★★★★☆ 048882 13.738 Podranki ★★★★☆ 018884 13.596 Nuremberg ★☆☆☆☆ 118443 13.596 Nothing Too Good for a Cowboy ###Markdown Top 20 results with LTR (top 10) ###Code payload = { "q": query, "defType": "edismax", "qf": "title_t description_t", "pf": "title_t description_t", "mm": 2, "rq": "{!ltr model=myLambdaMARTModel reRankDocs=10 efi.query=" + query + "}", "fl": "id,title_t,rating_f,score", "rows": TOP_N * 2 } params = urllib.parse.urlencode(payload, quote_via=urllib.parse.quote_plus) search_url = SOLR_URL + "select?" 
+ params resp = requests.get(search_url) resp_json = json.loads(resp.text) docs = resp_json["response"]["docs"] render_results(docs, query, TOP_N * 2) ###Output top 20 results for "world war ii" --- ★★★★☆ 030298 -1.897 The Secret of Santa Vittoria ★★★★☆ 143335 -2.010 The Gathering Storm ★★★☆☆ 074474 -2.055 Varian's War ★★★☆☆ 034945 -2.166 The Conspirators ★★★★☆ 029032 -2.174 The Secret Invasion ★★★☆☆ 035954 -2.281 Cornered ★★★☆☆ 165300 -2.281 Hotel Berlin ★★★★☆ 039485 -2.352 Hotel Sahara ★★★☆☆ 166610 -2.611 The Ducktators ★★★★☆ 043313 -2.683 The Teahouse of the August Moon ★★★★☆ 004820 14.032 Never So Few ★★★☆☆ 343070 14.004 Flight World War II ★★★★☆ 027367 13.883 Mrs. Miniver ★★★★☆ 022905 13.875 The Rape of Europa ★★★★☆ 011589 13.738 Kelly's Heroes ★★★★☆ 051044 13.738 Carmen Jones ★★★★☆ 044480 13.738 Education for Death ★★★★☆ 048882 13.738 Podranki ★★★★☆ 018884 13.596 Nuremberg ★☆☆☆☆ 118443 13.596 Nothing Too Good for a Cowboy
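###Markdown One way to quantify what the reranker did is to compare the doc-id orderings of the two listings above. With `reRankDocs=10`, LTR may only reorder the top-10 candidates, so the candidate *set* should be unchanged while positions move. A small sketch, with the doc ids hard-coded from the two outputs above: ###Code
```python
def rerank_shift(before, after):
    """Summarize how a reranker shuffled the head of a result list."""
    # Fraction of doc ids common to both top-N lists.
    overlap = len(set(before) & set(after)) / len(before)
    # Number of positions whose occupant changed.
    moved = sum(1 for b, a in zip(before, after) if b != a)
    return overlap, moved

# Top-10 doc ids copied from the two result listings above.
plain = [39485, 143335, 166610, 30298, 43313, 35954, 74474, 165300, 29032, 34945]
ltr   = [30298, 143335, 74474, 34945, 29032, 35954, 165300, 39485, 166610, 43313]

overlap, moved = rerank_shift(plain, ltr)
```
Here `overlap` comes out to 1.0 (same candidate set, as expected with `reRankDocs=10`) while most positions change, confirming the model reordered rather than replaced the head of the list.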
q1_cell_based_qubicc_r2b5/source_code/preprocessing_qubicc.ipynb
###Markdown
Preprocessing Qubicc

Converting the data into .npy files makes it possible for us to work with it efficiently; otherwise we would require around 500 GB of RAM, which is always difficult to guarantee. We preprocess QUBICC in a separate ipynb notebook precisely because of this issue.

1) We read the data
2) Reshape variables so that they have equal dimensionality
3) Reshape into data samples fit for the NN and convert into a DataFrame
4) Downsample the data: Remove data above 21 km, remove condensate-free clouds, combat class imbalance
5) Split into input and output
6) Save as .npy

Note: We neither scale nor split the data into training/validation/test sets. The reasons are that i) in order to scale we need the entire dataset, and for cross-validation different scalings will be necessary based on different subsets of the data; ii) the split into subsets will be done by the cross-validation procedure, or not at all when training the final model.
###Code
# Ran with 900GB

import sys
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import time

# import importlib
# importlib.reload(my_classes)

base_path = '/pf/b/b309170'
output_path = base_path + '/my_work/icon-ml_data/cloud_cover_parameterization/grid_cell_based_QUBICC_R02B05/based_on_var_interpolated_data'

# Add path with my_classes to sys.path
sys.path.insert(0, base_path + '/workspace_icon-ml/cloud_cover_parameterization/')

# Which days to load
days_qubicc = 'all_hcs'

from my_classes import load_data

VERT_LAYERS = 31

## Parameters for the notebook

# Set a numpy seed for the permutation later on!
np.random.seed(10)

# Set output_var to one of {'cl', 'cl_area'}
output_var = 'cl_area' ###Output _____no_output_____ ###Markdown Reading the data

Input:
- fr_land: Fraction of land
- coriolis: Coriolis parameter
- zg: Geometric height at full levels (3D)
- qv: Specific water vapor content (3D)
- qc: Specific cloud water content (3D)
- qi: Specific cloud ice content (3D)
- temp: Temperature (3D)
- pres: Pressure (3D)
- u: Zonal wind (3D)
- v: Meridional wind (3D)

$10$ input nodes

Output:
- clc: Cloud Cover

$1$ output node

The data above 21 km is capped. ###Code
# For cl_area I only need the output as I already have the input.
# I still need 'clw', 'cli', 'cl' for condensate-free clouds.
# If I were to use 'cl_area' for condensate-free clouds I would get an estimate
# which is slightly different due to coarse-graining.
order_of_vars_qubicc = ['hus', 'clw', 'cli', 'ta', 'pfull', 'ua', 'va', 'zg', 'coriolis', 'fr_land', output_var]

# Load QUBICC data
data_dict = load_data(source='qubicc', days=days_qubicc, resolution='R02B05',
                      order_of_vars=order_of_vars_qubicc)

for key in data_dict.keys():
    print(key, data_dict[key].shape)

(TIME_STEPS, VERT_LAYERS, HORIZ_FIELDS) = data_dict[output_var].shape

try:
    # Reshaping into nd-arrays of equaling shapes (don't reshape in the vertical)
    data_dict['zg'] = np.repeat(np.expand_dims(data_dict['zg'], 0), TIME_STEPS, axis=0)
    data_dict['coriolis'] = np.repeat(np.expand_dims(data_dict['coriolis'], 0), TIME_STEPS, axis=0)
    data_dict['coriolis'] = np.repeat(np.expand_dims(data_dict['coriolis'], 1), VERT_LAYERS, axis=1)
    data_dict['fr_land'] = np.repeat(np.expand_dims(data_dict['fr_land'], 0), TIME_STEPS, axis=0)
    data_dict['fr_land'] = np.repeat(np.expand_dims(data_dict['fr_land'], 1), VERT_LAYERS, axis=1)
except:
    pass

# Remove the first timesteps of the QUBICC simulations since the clc values are 0 across the entire earth there.
# Convert the data to float32!
remove_steps = [] for i in range(data_dict[output_var].shape[0]): if np.all(data_dict[output_var][i,4:,:] == 0): remove_steps.append(i) TIME_STEPS = TIME_STEPS - 1 for key in data_dict.keys(): data_dict[key] = np.float32(np.delete(data_dict[key], remove_steps, axis=0)) # Our Neural Network has trained with clc in [0, 100]! data_dict[output_var] = 100*data_dict[output_var] np.max(data_dict[output_var][:, 4:, :]) # Carry along information about the vertical layer of a grid cell. int16 is sufficient for < 1000. vert_layers = np.int16(np.repeat(np.expand_dims(np.arange(1, VERT_LAYERS+1), 0), TIME_STEPS, axis=0)) vert_layers = np.repeat(np.expand_dims(vert_layers, 2), HORIZ_FIELDS, axis=2) vert_layers.shape ### Subsample QUBICC data further # We reduce the data size to using only every three hours from the QUBICC data. # The reason is that training is almost impossible with a total data size of 3.6 Billion samples (from NARVAL we have 126 Mio samples). # To make it feasible we would need a training batch size of ~5000. # Therefore we need to decrease the amount of samples further. # We decrease the amount of QUBICC samples as they are less reliable than the NARVAL samples. # We split the dataset in half by only taking into account every three hours (we assume # a relatively high temporal correlation). 
for key in order_of_vars_qubicc: data_dict[key] = data_dict[key][0::3] vert_layers = vert_layers[0::3] # Reshaping into 1D-arrays and converting dict into a DataFrame-object (the following is based on Aurelien Geron) # Remove data above 21kms for key in order_of_vars_qubicc: data_dict[key] = np.reshape(data_dict[key][:, 4:, :], -1) vert_layers = np.reshape(vert_layers[:, 4:, :], -1) for key in data_dict.keys(): print(key, data_dict[key].shape) df = pd.DataFrame.from_dict(data_dict) # Number of samples/rows len(df) import gc del data_dict gc.collect() ###Output _____no_output_____ ###Markdown **Downsampling the data (minority class: clc = 0)** ###Code # There are no nans left assert np.all(np.isnan(df) == False) == True # Remove condensate-free clouds (7.3% of clouds) df = df.loc[~((df['cl'] > 0) & (df['clw'] == 0) & (df['cli'] == 0))] # We ensure that clc != 0 is twice as large as clc = 0 (which then has 294 Mio samples) and keep the original order intact df_noclc = df.loc[df['cl']==0] print(len(df_noclc)) # len(downsample_indices) will be the number of noclc samples that remain downsample_ratio = (len(df) - len(df_noclc))/len(df_noclc) shuffled_indices = np.random.permutation(df.loc[df['cl']==0].index) size_noclc = int(len(df_noclc)*downsample_ratio)//2 #Different from other notebooks. Division by 2 here. 
downsample_indices = shuffled_indices[:size_noclc] # Concatenate df.loc[df[output_var]!=0].index and downsample_indices final_indices = np.concatenate((downsample_indices, df.loc[df['cl']!=0].index)) # Sort final_indices so that we can more or less recover the timesteps final_indices = np.sort(final_indices) # Label-based (loc) not positional-based df = df.loc[final_indices] # Number of samples after downsampling len(df) #Modifies df as well def split_input_output(dataset): output_df = dataset[output_var] del dataset[output_var] return output_df output_df = split_input_output(df) # Save the data if output_var == 'cl': np.save(output_path + '/cloud_cover_input_qubicc.npy', df) np.save(output_path + '/cloud_cover_output_qubicc.npy', output_df) elif output_var == 'cl_area': np.save(output_path + '/cloud_area_output_qubicc.npy', output_df) # Save the corresponding vertical layers if output_var == 'cl': np.save(output_path + '/samples_vertical_layers_qubicc.npy', vert_layers[df.index]) ###Output _____no_output_____ ###Markdown Some tests of the cloud area output Test whether qi from the saved data coincides with the qi here ###Code if output_var == 'cl_area': old_input = np.load(output_path + '/cloud_cover_input_qubicc.npy') # If this yields True then we're done print(np.all(old_input[:,2] == df['cli'])) clc = np.load(output_path + '/cloud_cover_output_qubicc.npy') cl_area = np.load(output_path + '/cloud_area_output_qubicc.npy') diff = cl_area - clc plt.hist(diff, bins = 100) plt.show() plt.hist(diff, bins = 100, log=True) plt.show() # These are anomalies existing due to differences in coarse-graining len(np.where(diff < 0)[0]) len(np.where(diff > 0)[0]) len(np.where(diff >= 0)[0]) len(np.where(diff < 0)[0])/len(diff) # 1.56% of the data len(np.where(diff < 0)[0])/len(np.where(diff != 0)[0]) # 2.36% of cloudy data ###Output _____no_output_____
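The class-rebalancing above can be distilled into a small order-preserving sketch. This is plain Python for illustration (not the notebook's actual pandas code): keep every cloudy sample and a seeded random subset of clear-sky samples sized so cloudy:clear is roughly 2:1, then sort the kept indices to roughly recover the time ordering.

```python
import random

def downsample_zero_class(labels, seed=10):
    """Keep all nonzero-label samples plus a random subset of zero-label ones,
    sized so nonzero:zero is roughly 2:1; return kept indices in sorted order
    (which roughly preserves the original time ordering)."""
    zero_idx = [i for i, v in enumerate(labels) if v == 0]
    nonzero_idx = [i for i, v in enumerate(labels) if v != 0]
    rng = random.Random(seed)
    rng.shuffle(zero_idx)
    kept_zero = zero_idx[:len(nonzero_idx) // 2]  # half as many zeros as nonzeros
    return sorted(kept_zero + nonzero_idx)

labels = [0] * 90 + [1] * 10  # heavily imbalanced toy labels
kept = downsample_zero_class(labels)
print(len(kept))  # 15: 10 nonzero + 5 zero
```

The sort at the end mirrors the `np.sort(final_indices)` step above, so downstream code can still exploit temporal locality.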
018_time_series_sktime.ipynb
###Markdown Time Series Forecasting with SKtime

SKtime provides a familiar interface to time series functionality.

Common Lingo
- forecaster = model
- ForecastingHorizon = what to create predictions for

Install
You'll need to install the library sktime for this to work, so:
- pip install sktime
- conda install sktime
- Conda UI to install

You'll also need to install separate packages for ARIMA - pmdarima, and for prophet - prophet. One thing we can try here is to copy an environment, just in case we break anything. We can clone an environment using: conda create --name tmp_test --clone ml3950 where ml3950 is the current environment, and tmp_test is the new one. This is a good chance to try it before installing.

Load Airline Data ###Code y = load_airline() plot_series(y) ###Output _____no_output_____ ###Markdown Train-Test Split

Since time series runs sequentially, the train-test split for error calculations is normally just chopping off the last part of the sequence and using that to test. Sktime provides a dedicated function to do so. You'll commonly see array notation to slice before/after as well. Here we take the final 36 months for testing. We also make a forecasting horizon; this one is set to the months of the test data, since we are evaluating accuracy. ###Code y_train, y_test = temporal_train_test_split(y, test_size=36) fh = ForecastingHorizon(y_test.index, is_relative=False) ###Output _____no_output_____ ###Markdown Exponential Smoothing

We can guess that the period is 12 months since it looks like a yearly pattern. We can also try to capture the trend and the seasonality.

Additive or Multiplicative
Rule of thumb: if the size of the seasonal difference is changing over time -> multiplicative; if it is constant -> additive. Here the size of the seasonal swings seems to be getting larger, so that is multiplicative. The trend seems to be a constant increase, so additive. We can see how to test these later - it is not always obvious.
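One rough heuristic for the additive-vs-multiplicative call can be sketched in plain Python (this is an illustration, not part of sktime): if the seasonal swing grows along with the level of the series, guess multiplicative; if it stays flat, guess additive.

```python
import math

def additive_or_multiplicative(values, period):
    """Rough heuristic: compare the seasonal swing (max - min within each
    cycle) of the first and last cycles. Growing swings suggest a
    multiplicative model; roughly constant swings suggest additive."""
    swings = [max(values[s:s + period]) - min(values[s:s + period])
              for s in range(0, len(values) - period + 1, period)]
    ratio = swings[-1] / swings[0]  # how much the swing grew, first to last cycle
    return "mul" if ratio > 1.5 else "add"

# Synthetic checks: growing swings vs constant swings around a linear trend
mul_series = [(100 + 10 * t) * (1 + 0.3 * math.sin(2 * math.pi * t / 12)) for t in range(48)]
add_series = [100 + 10 * t + 20 * math.sin(2 * math.pi * t / 12) for t in range(48)]
print(additive_or_multiplicative(mul_series, 12))  # mul
print(additive_or_multiplicative(add_series, 12))  # add
```

The 1.5 threshold is arbitrary; in practice you would eyeball the plot or compare residuals of both decompositions.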
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing

forecaster = ExponentialSmoothing(trend="add", seasonal="mul", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_squared_percentage_error(y_pred, y_test) ###Output _____no_output_____ ###Markdown Results - it looks like our model was pretty good. What if we want to make predictions into the future? We need to modify the forecasting horizon to, well, look into the horizon; then it is pretty similar. We can give the month indices for the future months, as well as an argument "is_relative" that will tell sktime to pick up at the end. We can also retrain the model to use all the data, since we are done evaluating the model here. ###Code
# Next 6 years
dates_range = list(range(0, 72))
fh_long = ForecastingHorizon(values=dates_range, is_relative=True)
forecaster.fit(y)
y_pred = forecaster.predict(fh_long)
plot_series(y, y_pred, labels=["y_train", "y_pred"]) ###Output _____no_output_____ ###Markdown ARIMA

We can try a similar approach with an ARIMA model. ###Code
from sktime.forecasting.arima import ARIMA ###Output _____no_output_____ ###Markdown AD Fuller Test for D Term

The number of .diff()s in the code is the number of differences that we are introducing. Having one yielded a value very close to .05, so we can try that for D. ###Code
from statsmodels.tsa.stattools import adfuller

result = adfuller(y.diff().dropna())
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1]) ###Output ADF Statistic: -2.829267 p-value: 0.054213 ###Markdown Autocorrelation and Partial Autocorrelation for MA and AR

Determining p and q is a little more haphazard. Below is a process to look for them with the ACF and PACF charts. In short, we can look for a starting point, then potentially adjust.
We will have a solution for this soon...

Process:
If the PACF of the differenced series shows a sharp cut-off and/or the lag-1 autocorrelation is positive (this indicates an 'under-differenced' series) while the ACF decays more slowly, then consider adding an AR term to the model. The number of AR terms will depend on the lag at which the PACF cuts off.
If the ACF of the differenced series shows a sharp cut-off and/or the lag-1 autocorrelation is negative (this indicates an 'over-differenced' series) while the PACF decays more slowly, then consider adding an MA term to the model. Here, the autocorrelation pattern is explained more by adding the MA terms. The number of MA terms will depend on the lag at which the ACF cuts off.
An AR term is associated with under-differencing or positive autocorrelation at lag 1, while an MA term is associated with over-differencing or negative autocorrelation at lag 1. ###Code
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# ACF/PACF plot of 1st differenced series
fig, axes = plt.subplots(1, 2)
plot_acf(y.diff().dropna(), ax=axes[0])
plot_pacf(y.diff().dropna(), ax=axes[1])
plt.show() ###Output _____no_output_____ ###Markdown We can try:
- AR (p) - 1
- I (d) - 1
- MA (q) - 1

Seasonality

We can figure out the same things for the seasonal trend. Seasonality - we can guess pretty easily that it is a one-year pattern, so we can include that as m. The seasonal_order attributes are:
- P: Seasonal autoregressive order
- D: Seasonal difference order
- Q: Seasonal moving average order
- m: The number of time steps for a single seasonal period

Check D
AD Fuller test. The p-value is small, so no differencing is needed.
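Rather than only eyeballing the charts, the cut-off logic can also be sketched numerically. The following is a plain-Python illustration (sample ACF, plus the Durbin-Levinson recursion for the PACF, with the usual ~95% band 1.96/sqrt(n)); in practice statsmodels' `acf`/`pacf` functions do this properly.

```python
import math
import random

def acf(x, nlags):
    """Sample autocorrelation function, normalized so acf[0] == 1."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    return [sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / n / c0
            for k in range(nlags + 1)]

def pacf(x, nlags):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    r = acf(x, nlags)
    phi = [[0.0] * (nlags + 1) for _ in range(nlags + 1)]
    out = [1.0]
    for k in range(1, nlags + 1):
        num = r[k] - sum(phi[k - 1][j] * r[k - j] for j in range(1, k))
        den = 1.0 - sum(phi[k - 1][j] * r[j] for j in range(1, k))
        phi[k][k] = num / den
        for j in range(1, k):
            phi[k][j] = phi[k - 1][j] - phi[k][k] * phi[k - 1][k - j]
        out.append(phi[k][k])
    return out

def cutoff_lag(values, n):
    """Last consecutive lag outside the approximate 95% band 1.96/sqrt(n)."""
    band = 1.96 / math.sqrt(n)
    lag = 0
    for k in range(1, len(values)):
        if abs(values[k]) > band:
            lag = k
        else:
            break
    return lag

# AR(1) toy series with phi = 0.7: its PACF should cut off after lag 1
random.seed(0)
series = [0.0]
for _ in range(500):
    series.append(0.7 * series[-1] + random.gauss(0, 1))

print("suggested p:", cutoff_lag(pacf(series, 10), len(series)))
```

Sampling noise means the suggested lag can overshoot by one now and then, which is why the plot-plus-adjust workflow above is still the usual practice.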
###Code
result = adfuller((y-y.shift(12)).dropna())
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1]) ###Output ADF Statistic: -3.383021 p-value: 0.011551 ###Markdown Check P and Q ###Code
fig, axes = plt.subplots(1, 2)
plot_acf((y-y.shift(12)).dropna(), ax=axes[0])
plot_pacf((y-y.shift(12)).dropna(), ax=axes[1])
plt.show()

forecaster = ARIMA(order=(1, 1, 1), seasonal_order=(2, 0, 0, 12), suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_squared_percentage_error(y_pred, y_test) ###Output _____no_output_____ ###Markdown Predict Into Future ###Code
dates_range = list(range(0, 72))
fh_long = ForecastingHorizon(values=dates_range, is_relative=True)
forecaster.fit(y)
y_pred = forecaster.predict(fh_long)
plot_series(y, y_pred, labels=["y_train", "y_pred"]) ###Output _____no_output_____ ###Markdown AutoARIMA

Going through all that work to find ARIMA terms seems suboptimal, and it is. We can use AutoARIMA to do a grid-search-ish process to find the ARIMA values for us. We supply the sp=12 for the seasonality pattern. Try without it, or with something different and observe. ###Code
from sktime.forecasting.arima import AutoARIMA

forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
print(mean_absolute_percentage_error(y_pred, y_test))
print(forecaster.get_fitted_params()) ###Output 0.041170623702305884 {'ar.L1': -0.24111779230017605, 'sigma2': 92.74986650446229, 'order': (1, 1, 0), 'seasonal_order': (0, 1, 0, 12), 'aic': 704.0011679023331, 'aicc': 704.1316026849419, 'bic': 709.1089216855343, 'hqic': 706.0650836393346} ###Markdown Automated Tools and Pipelines

Since sktime is structured like sklearn, we can incorporate things into standard functionality like pipelines.
The sktime library provides a TransformedTargetForecaster that we can use as a pipeline - the reason for this difference is that the time series data is the target data, not the feature set like in a normal pipeline. There are also a few other automated tools that we won't explore in detail, but they are clearly named and explained in the documentation:
- Detrender - remove trends from time series.
- Deseasonalizer - remove seasonality from time series.

Both transform the time series data to remove the non-stationary bits. ###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer

forecaster = TransformedTargetForecaster(
    [
        ("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
        ("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
        ("forecast", AutoARIMA()),
    ])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test) ###Output _____no_output_____ ###Markdown GridSearch

We can also use a forecasting grid search to test for the best parameters, just like normal. The customizations here are:
- The cross-validation is provided by the SlidingWindowSplitter, which will slice a time series into windows for tests.
- The OptionalPassthrough allows the True/False inclusion in the cv, so we can test if things should be included or not.
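The windowing idea behind this cross-validation can be sketched in plain Python. This is an illustration of the concept only, not sktime's SlidingWindowSplitter API (in particular, `initial_window`/`start_with_window` handling is omitted): each fold trains on a fixed-length window and tests on the points immediately after it, then the window slides forward.

```python
def sliding_windows(n, window_length, step_length, fh):
    """Yield (train, test) index lists for a sliding-window split.
    Plain-Python sketch of the sliding-window CV concept."""
    start = 0
    while start + window_length + fh <= n:
        train = list(range(start, start + window_length))
        test = list(range(start + window_length, start + window_length + fh))
        yield train, test
        start += step_length

# 108 points (the length of the airline y_train), 24-point windows sliding
# by 24, each fold evaluated on the following 12 points
for train, test in sliding_windows(n=108, window_length=24, step_length=24, fh=12):
    print(train[0], "-", train[-1], "->", test[0], "-", test[-1])
```

Unlike shuffled k-fold CV, every test index here comes strictly after its training window, which is the property that makes the scheme valid for forecasting.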
###Code
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import ForecastingGridSearchCV, SlidingWindowSplitter
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer

# create pipeline
pipe = TransformedTargetForecaster(
    steps=[
        ("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
        ("forecaster", ExponentialSmoothing()),
    ])

# putting it all together in a grid search
cv = SlidingWindowSplitter(initial_window=36, window_length=24, start_with_window=True, step_length=24)
param_grid = {
    "deseasonalizer__passthrough": [True, False],
    "forecaster__sp": [2,3,4,5,6,7,8,9,10,11,12],
    "forecaster__trend": ["add", "mul"],
    "forecaster__seasonal": ["add", "mul"]
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
print(mean_squared_percentage_error(y_pred, y_test))
print(gscv.best_params_) ###Output 0.0033508158084145203 {'deseasonalizer__passthrough': True, 'forecaster__seasonal': 'mul', 'forecaster__sp': 12, 'forecaster__trend': 'add'} ###Markdown Facebook Prophet

One different thing that we can do with sktime is that we can import a model from another package and use it - in this case something offered by Facebook called Prophet. This package is a more sophisticated model for time series predictions created by Facebook. We can look to the documentation for details.
###Code from sktime.forecasting.fbprophet import prophet # Convert index to pd.DatetimeIndex z = y.copy() z = z.to_timestamp(freq="M") z_train, z_test = temporal_train_test_split(z, test_size=36) forecaster = Prophet( seasonality_mode="multiplicative", n_changepoints=int(len(y_train) / 12), add_country_holidays={"country_name": "Canada"}, yearly_seasonality=True, weekly_seasonality=False, daily_seasonality=False, ) forecaster.fit(z_train) y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1])) y_pred.index = y_test.index plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]) mean_absolute_percentage_error(y_pred, y_test) ###Output _____no_output_____ ###Markdown Working Example ###Code from sktime.datasets import load_shampoo_sales sh = load_shampoo_sales() print(len(sh)) plot_series(sh) ###Output 36 ###Markdown Split Exponential Smoothing ARIMA ###Code # d - value. # ACF/PACF plot #ARIMA ###Output _____no_output_____ ###Markdown Time Series Forecasting with SKtimeSKtime provides a familiar interface to time series functionality. Common Lingo forecaster = model ForecasterHorizon = what to create predictions for InstallYou'll need to install the library sktime for this to work, so:pip install sktimeconda install sktimeConda UI to installYou'll also need to instal separate packages for ARIMA - pmdarima, and for prophet - prophet. One thing we can try here is to copy an environment, just in case we break anything. We can clone an environment using: conda create --name tmp_test --clone ml3950 where ml3950 is the current environment, and tmp_test is the new oneThis is a good chance to try it before installing. Load Airline Data ###Code y = load_airline() plot_series(y) ###Output _____no_output_____ ###Markdown Train-Test SplitSince time series runs sequentially, the train-test split for error calcualtions is normally just chopping off the last part of the sequence and use that to test. Sktime provides a dedicated function to do so. 
You'll commonly see array notation to slice before/after as well. Here we take the final 36 months for testing. We also make a forecast horizion, this one is set to be the months of the test data, since we are evaluating accuracy. ###Code y_train, y_test = temporal_train_test_split(y, test_size=36) fh = ForecastingHorizon(y_test.index, is_relative=False) ###Output _____no_output_____ ###Markdown Exponential Smoothing. We can guess that the period is 12 months since it looks like a yearly pattern. We can also try to capture the trend and the seasonality. Additive or MultiplicitiveRule of thumb: if the difference is changing over time -> multiplicitive, if it is constant -> additive. Here the size of the seasonal swings seems to be getting larger, so that is multiplicitive. The trend seems to be a constant increase, so additive. We can see how to test these later - it is not always obvious. ###Code from sktime.forecasting.exp_smoothing import ExponentialSmoothing forecaster = ExponentialSmoothing(trend="add", seasonal="mul", sp=12) forecaster.fit(y_train) y_pred = forecaster.predict(fh) plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]) mean_squared_percentage_error(y_pred, y_test) ###Output _____no_output_____ ###Markdown Results - it looks like our model was pretty good. What if we want to make predictions into the future? We need to modify the forecasting horizion to, well, look into the horizon, then it is pretty similar. We can give the month indicies for the future months, as well as an argument "is_relative" that will tell sktiem to pick up at the end. We can also retrain the model to use all the data, since we are done evaluating the model here. 
###Code # Next 6 years dates_range = list(range(0, 72)) fh_long = ForecastingHorizon(values=dates_range, is_relative=True) forecaster.fit(y) y_pred = forecaster.predict(fh_long) plot_series(y, y_pred, labels=["y_train", "y_pred"]) ###Output _____no_output_____ ###Markdown ARIMAWe can try a similar approach with an ARIMA model. ###Code from sktime.forecasting.arima import ARIMA ###Output _____no_output_____ ###Markdown AD Fuller Test for D TermThe number of .diff()s in the code is the number of differences that we are introducing. Having one yeilded a value very close to .05, so we can try that for D. ###Code from statsmodels.tsa.stattools import adfuller result = adfuller(y.diff().dropna()) print('ADF Statistic: %f' % result[0]) print('p-value: %f' % result[1]) ###Output ADF Statistic: -2.829267 p-value: 0.054213 ###Markdown Partial Autocorrelation and Partial Autocorrelation for MA and AR. Determining p and q is a little more haphazard. Below is a processes to look for them with the ACF and PACF charts. In short, we can look for a starting point, then potentially adjust. We will have a solution for this soon...Process:If the PACF of the differenced series shows a sharp cut off and/or the lag1 autocorrelation is positive (this indicates an ‘under- differenced’ series) while the ACF decays more slowly , then consider adding an AR term to the model. The number of AR terms will depend on the lag at which PACF cuts off.If the ACF of the differenced series shows a sharp cut off and/or the lag1 autocorrelation is negative (this indicates an ‘over- differenced’ series) while the PACF decays more slowly , then consider adding MA term to the model. Here, the autocorrelation pattern is explained more by adding the MA terms. The number of MA terms will depend on the lag at which ACF cuts off.An AR term is associated with under-differencing or positive auto correlation at lag 1while an MA term is associated with over-differencing or negative auto correlation at lag 1. 
###Code from statsmodels.graphics.tsaplots import plot_acf, plot_pacf # ACF/PACF plot of 1st differenced series fig, axes = plt.subplots(1, 2) plot_acf(y.diff().dropna(), ax=axes[0]) plot_pacf(y.diff().dropna(), ax=axes[1]) plt.show() ###Output _____no_output_____ ###Markdown We can try: AR (p) - 1 I (d) - 1 MA (q) - 1 SeasonalityWe can figure out the same things for the seasonal trend.Seasonality - we can guess pretty easily that it is a one year pattern, so we can include that as m. The seasonal_order attributes are: P: Seasonal autoregressive order D: Seasonal difference order Q: Seasonal moving average order m: The number of time steps for a single seasonal period Check DAD Fuller test. p Value is small, no differencing needed. ###Code result = adfuller((y-y.shift(12)).dropna()) print('ADF Statistic: %f' % result[0]) print('p-value: %f' % result[1]) ###Output ADF Statistic: -3.383021 p-value: 0.011551 ###Markdown Check P and Q ###Code fig, axes = plt.subplots(1, 2) plot_acf((y-y.shift(12)).dropna(), ax=axes[0]) plot_pacf((y-y.shift(12)).dropna(), ax=axes[1]) plt.show() forecaster = ARIMA(order=(1, 1, 1), seasonal_order=(2, 0, 0, 12), suppress_warnings=True) forecaster.fit(y_train) y_pred = forecaster.predict(fh) plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]) mean_squared_percentage_error(y_pred, y_test) ###Output _____no_output_____ ###Markdown Predict Into Future ###Code dates_range = list(range(0, 72)) fh_long = ForecastingHorizon(values=dates_range, is_relative=True) forecaster.fit(y) y_pred = forecaster.predict(fh_long) plot_series(y, y_pred, labels=["y_train", "y_pred"]) ###Output _____no_output_____ ###Markdown AutoARIMAGoing through all that work to find ARIMA terms seems suboptimal, and it is. We can use AutoARIMA to do a grid-search-ish process to find the ARIMA values for us. We supply the sp=12 for the seasonality pattern. Try without it, or with something different and observe. 
###Code from sktime.forecasting.arima import AutoARIMA forecaster = AutoARIMA(sp=12, suppress_warnings=True) forecaster.fit(y_train) y_pred = forecaster.predict(fh) plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]) print(mean_absolute_percentage_error(y_pred, y_test)) print(forecaster.get_fitted_params()) ###Output 0.04117062369995542 {'ar.L1': -0.24111778986083574, 'sigma2': 92.74986459318954, 'order': (1, 1, 0), 'seasonal_order': (0, 1, 0, 12), 'aic': 704.0011679023804, 'aicc': 704.1316026849892, 'bic': 709.1089216855815, 'hqic': 706.0650836393819} ###Markdown Automated Tools and PipelinesSince sktime is structured like sklearn, we can incorporate things into standard functionality like pipelines. The sktime library provides a TransformedTargetForecaster that we can use as a pipeline - the reason for this difference is because the time series data is the target data, not the feature set like a normal pipeline. There are also a few other automated tools that we won't explore in detail, but are clearly named and explained in the documentation: Detrender - remove trends from time series. Deseasonalizer - remove seasonality from time series. Both transform the time series data to remove the non-stationary bits. 
###Code from sktime.forecasting.trend import PolynomialTrendForecaster from sktime.transformations.series.detrend import Detrender from sktime.forecasting.compose import TransformedTargetForecaster from sktime.transformations.series.detrend import Deseasonalizer forecaster = TransformedTargetForecaster( [ ("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)), ("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))), ("forecast", AutoARIMA()), ]) forecaster.fit(y_train) y_pred = forecaster.predict(fh) plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]) mean_absolute_percentage_error(y_pred, y_test) ###Output _____no_output_____ ###Markdown GridSearchWe can also use a forecasting grid search to test for the best parameters, just like normal. The customizations here are: The crossvalidation is provided by the SlidingWindowSplitter, which will slice a time series into windows for tests. The OptionalPassthrough allows the True/False inclusion in the cv, so we can test if things should be included or not. 
###Code from sktime.forecasting.compose import TransformedTargetForecaster from sktime.forecasting.model_selection import ForecastingGridSearchCV, SlidingWindowSplitter from sktime.transformations.series.compose import OptionalPassthrough from sktime.transformations.series.detrend import Deseasonalizer # create pipeline pipe = TransformedTargetForecaster( steps=[ ("deseasonalizer", OptionalPassthrough(Deseasonalizer())), ("forecaster", ExponentialSmoothing()), ]) # putting it all together in a grid search cv = SlidingWindowSplitter(initial_window=36, window_length=24, start_with_window=True, step_length=24) param_grid = { "deseasonalizer__passthrough": [True, False], "forecaster__sp": [2,3,4,5,6,7,8,9,10,11,12], "forecaster__trend": ["add", "mul"], "forecaster__seasonal": ["add", "mul"] } gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1) gscv.fit(y_train) y_pred = gscv.predict(fh) plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]) print(mean_squared_percentage_error(y_pred, y_test)) print(gscv.best_params_) ###Output 0.003447091573469646 {'deseasonalizer__passthrough': True, 'forecaster__seasonal': 'mul', 'forecaster__sp': 12, 'forecaster__trend': 'add'} ###Markdown FaceBook ProphetOne different thing that we can do with sktime is that we can import a pre-trained model and use it - in this case something offered by Facebook called Prophet. This package is a more sophisticated model for time series predictions created by Facebook. We can look to the documentation for details. 
###Code from sktime.forecasting.fbprophet import Prophet # Convert index to pd.DatetimeIndex z = y.copy() z = z.to_timestamp(freq="M") z_train, z_test = temporal_train_test_split(z, test_size=36) forecaster = Prophet( seasonality_mode="multiplicative", n_changepoints=int(len(y_train) / 12), add_country_holidays={"country_name": "Canada"}, yearly_seasonality=True, weekly_seasonality=False, daily_seasonality=False, ) forecaster.fit(z_train) y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1])) y_pred.index = y_test.index plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]) mean_absolute_percentage_error(y_pred, y_test) ###Output /Users/akeems/opt/anaconda3/envs/tmp_test/lib/python3.9/site-packages/sktime/forecasting/base/_fh.py:337: FutureWarning: Timestamp.freqstr is deprecated and will be removed in a future version. cutoff = _coerce_to_period(cutoff, freq=cutoff.freqstr) ###Markdown Working Example ###Code from sktime.datasets import load_shampoo_sales sh = load_shampoo_sales() print(len(sh)) plot_series(sh) ###Output 36 ###Markdown Split Exponential Smoothing ARIMA ###Code # d - value. # ACF/PACF plot #ARIMA ###Output _____no_output_____ ###Markdown Time Series Forecasting with SKtimeSKtime provides a familiar interface to time series functionality. Common Lingo forecaster = model ForecasterHorizon = what to create predictions for InstallYou'll need to install the library sktime for this to work, so:pip install sktimeconda install sktimeConda UI to installYou'll also need to instal separate packages for ARIMA - pmdarima, and for prophet - prophet. One thing we can try here is to copy an environment, just in case we break anything. We can clone an environment using: conda create --name tmp_test --clone ml3950 where ml3950 is the current environment, and tmp_test is the new oneThis is a good chance to try it before installing. 
Load Airline Data ###Code y = load_airline() plot_series(y) ###Output _____no_output_____ ###Markdown Train-Test SplitSince time series runs sequentially, the train-test split for error calcualtions is normally just chopping off the last part of the sequence and use that to test. Sktime provides a dedicated function to do so. You'll commonly see array notation to slice before/after as well. Here we take the final 36 months for testing. We also make a forecast horizion, this one is set to be the months of the test data, since we are evaluating accuracy. ###Code y_train, y_test = temporal_train_test_split(y, test_size=36) fh = ForecastingHorizon(y_test.index, is_relative=False) ###Output _____no_output_____ ###Markdown Exponential Smoothing. We can guess that the period is 12 months since it looks like a yearly pattern. We can also try to capture the trend and the seasonality. Additive or MultiplicitiveRule of thumb: if the difference is changing over time -> multiplicitive, if it is constant -> additive. Here the size of the seasonal swings seems to be getting larger, so that is multiplicitive. The trend seems to be a constant increase, so additive. We can see how to test these later - it is not always obvious. ###Code from sktime.forecasting.exp_smoothing import ExponentialSmoothing forecaster = ExponentialSmoothing(trend="add", seasonal="mul", sp=12) forecaster.fit(y_train) y_pred = forecaster.predict(fh) plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]) mean_squared_percentage_error(y_pred, y_test) ###Output _____no_output_____ ###Markdown Results - it looks like our model was pretty good. What if we want to make predictions into the future? We need to modify the forecasting horizion to, well, look into the horizon, then it is pretty similar. We can give the month indicies for the future months, as well as an argument "is_relative" that will tell sktiem to pick up at the end. 
We can also retrain the model to use all the data, since we are done evaluating the model here. ###Code
# Next 6 years
dates_range = list(range(0, 72))
fh_long = ForecastingHorizon(values=dates_range, is_relative=True)
forecaster.fit(y)
y_pred = forecaster.predict(fh_long)
plot_series(y, y_pred, labels=["y_train", "y_pred"])
###Output
_____no_output_____
###Markdown ARIMA
We can try a similar approach with an ARIMA model. ###Code
from sktime.forecasting.arima import ARIMA
###Output
_____no_output_____
###Markdown ADF Test for the D Term
The number of .diff()s in the code is the number of differences that we are introducing. Having one yielded a p-value very close to .05, so we can try that for D. ###Code
from statsmodels.tsa.stattools import adfuller

result = adfuller(y.diff().dropna())
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
###Output
ADF Statistic: -2.829267
p-value: 0.054213
###Markdown Autocorrelation and Partial Autocorrelation for MA and AR
Determining p and q is a little more haphazard. Below is a process to look for them with the ACF and PACF charts. In short, we can look for a starting point, then potentially adjust. We will have a solution for this soon...
Process:
If the PACF of the differenced series shows a sharp cutoff and/or the lag-1 autocorrelation is positive (this indicates an 'under-differenced' series) while the ACF decays more slowly, then consider adding an AR term to the model. The number of AR terms will depend on the lag at which the PACF cuts off.
If the ACF of the differenced series shows a sharp cutoff and/or the lag-1 autocorrelation is negative (this indicates an 'over-differenced' series) while the PACF decays more slowly, then consider adding an MA term to the model. Here, the autocorrelation pattern is explained more by adding the MA terms.
The number of MA terms will depend on the lag at which the ACF cuts off.
An AR term is associated with under-differencing or positive autocorrelation at lag 1, while an MA term is associated with over-differencing or negative autocorrelation at lag 1. ###Code
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# ACF/PACF plot of 1st differenced series
fig, axes = plt.subplots(1, 2)
plot_acf(y.diff().dropna(), ax=axes[0])
plot_pacf(y.diff().dropna(), ax=axes[1])
plt.show()
###Output
_____no_output_____
###Markdown We can try:
AR (p) - 1
I (d) - 1
MA (q) - 1
Seasonality
We can figure out the same things for the seasonal component. We can guess pretty easily that it is a one-year pattern, so we can include that as m. The seasonal_order attributes are:
P: Seasonal autoregressive order
D: Seasonal difference order
Q: Seasonal moving average order
m: The number of time steps for a single seasonal period
Check D
ADF test. The p-value is small, so no seasonal differencing is needed. ###Code
result = adfuller((y-y.shift(12)).dropna())
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
###Output
ADF Statistic: -3.383021
p-value: 0.011551
###Markdown Check P and Q ###Code
fig, axes = plt.subplots(1, 2)
plot_acf((y-y.shift(12)).dropna(), ax=axes[0])
plot_pacf((y-y.shift(12)).dropna(), ax=axes[1])
plt.show()

forecaster = ARIMA(order=(1, 1, 1), seasonal_order=(2, 0, 0, 12), suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_squared_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown Predict Into Future ###Code
dates_range = list(range(0, 72))
fh_long = ForecastingHorizon(values=dates_range, is_relative=True)
forecaster.fit(y)
y_pred = forecaster.predict(fh_long)
plot_series(y, y_pred, labels=["y_train", "y_pred"])
###Output
_____no_output_____
###Markdown AutoARIMA
Going through all that work to find ARIMA terms seems
suboptimal, and it is. We can use AutoARIMA to do a grid-search-ish process to find the ARIMA values for us. We supply sp=12 for the seasonality pattern. Try without it, or with something different, and observe. ###Code
from sktime.forecasting.arima import AutoARIMA

forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
print(mean_absolute_percentage_error(y_pred, y_test))
print(forecaster.get_fitted_params())
###Output
0.04117062369995542
{'ar.L1': -0.24111778986083574, 'sigma2': 92.74986459318954, 'order': (1, 1, 0), 'seasonal_order': (0, 1, 0, 12), 'aic': 704.0011679023804, 'aicc': 704.1316026849892, 'bic': 709.1089216855815, 'hqic': 706.0650836393819}
###Markdown Automated Tools and Pipelines
Since sktime is structured like sklearn, we can incorporate things into standard functionality like pipelines. The sktime library provides a TransformedTargetForecaster that we can use as a pipeline - the name differs because the time series itself is the target data, not a feature set as in a normal pipeline. There are also a few other automated tools that we won't explore in detail, but that are clearly named and explained in the documentation:
Detrender - remove trends from time series.
Deseasonalizer - remove seasonality from time series.
Both transform the time series data to remove the non-stationary bits.
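What the Detrender does can also be sketched from scratch: fit a least-squares line through the series and subtract it. This is a stdlib illustration of the concept, not sktime's implementation:

```python
# Conceptual sketch of detrending: fit a straight line to the series,
# subtract it, and keep the residual. Hand-rolled, not sktime's code.

def fit_line(y):
    n = len(y)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(y) / n
    slope = sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, y)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return slope, intercept

def detrend(y):
    slope, intercept = fit_line(y)
    return [v - (slope * t + intercept) for t, v in enumerate(y)]

residual = detrend([10, 12, 14, 16, 18])   # perfectly linear series
print([round(r, 9) for r in residual])     # -> [0.0, 0.0, 0.0, 0.0, 0.0]
```

A perfectly linear series detrends to all zeros; a real series leaves behind the seasonal and noise components for the downstream forecaster to model.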
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer

forecaster = TransformedTargetForecaster(
    [
        ("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
        ("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
        ("forecast", AutoARIMA()),
    ])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown GridSearch
We can also use a forecasting grid search to test for the best parameters, just like normal. The customizations here are:
The cross-validation is provided by the SlidingWindowSplitter, which will slice a time series into windows for tests.
The OptionalPassthrough allows True/False inclusion in the cv, so we can test whether things should be included or not.
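The windowing idea behind the SlidingWindowSplitter can be sketched by hand. The parameter names below are simplified and do not match sktime's exact semantics - this only shows how a fixed-length train window slides forward over the series:

```python
# Hand-rolled sketch of sliding-window cross-validation for a time series.
# Simplified parameter names; sktime's SlidingWindowSplitter is richer.

def sliding_windows(n, train_length, test_length, step):
    """Yield (train_indices, test_indices) pairs sliding over n points."""
    start = 0
    while start + train_length + test_length <= n:
        train_idx = list(range(start, start + train_length))
        test_idx = list(range(start + train_length,
                              start + train_length + test_length))
        yield train_idx, test_idx
        start += step

for tr, te in sliding_windows(n=12, train_length=4, test_length=4, step=4):
    print(tr, te)
# -> [0, 1, 2, 3] [4, 5, 6, 7]
# -> [4, 5, 6, 7] [8, 9, 10, 11]
```

Unlike shuffled k-fold, every test window sits strictly after its train window, so the evaluation never leaks future observations into the past.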
###Code
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import ForecastingGridSearchCV, SlidingWindowSplitter
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer

# create pipeline
pipe = TransformedTargetForecaster(
    steps=[
        ("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
        ("forecaster", ExponentialSmoothing()),
    ])

# putting it all together in a grid search
cv = SlidingWindowSplitter(initial_window=36, window_length=24, start_with_window=True, step_length=24)
param_grid = {
    "deseasonalizer__passthrough": [True, False],
    "forecaster__sp": [2,3,4,5,6,7,8,9,10,11,12],
    "forecaster__trend": ["add", "mul"],
    "forecaster__seasonal": ["add", "mul"]
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
print(mean_squared_percentage_error(y_pred, y_test))
print(gscv.best_params_)
###Output
0.003447091573469646
{'deseasonalizer__passthrough': True, 'forecaster__seasonal': 'mul', 'forecaster__sp': 12, 'forecaster__trend': 'add'}
###Markdown Facebook Prophet
One different thing we can do with sktime is pull in a model from an outside library and use it through the same interface - in this case Prophet, a more sophisticated time series model created by Facebook. We can look to the documentation for details.
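Prophet models a series as a sum of trend, seasonality, and holiday components. A toy additive decomposition on synthetic data gives the flavor; this is a hand-rolled sketch (Prophet's actual fitting is Bayesian and far more sophisticated):

```python
# Toy additive decomposition in the spirit of trend + seasonality models.
# Hand-rolled on synthetic data -- not how Prophet actually fits.
import math

n, period = 48, 12
series = [0.5 * t + 10 * math.sin(2 * math.pi * t / period) for t in range(n)]

# sample a whole number of cycles apart so the seasonal term cancels out
slope = (series[3 * period] - series[0]) / (3 * period)
detrended = [v - slope * t for t, v in enumerate(series)]

# average the detrended values by position-in-cycle -> seasonal profile
seasonal = [sum(detrended[p::period]) / (n // period) for p in range(period)]

print(round(slope, 2))          # -> 0.5  (the true trend slope)
print(round(max(seasonal), 1))  # -> 10.0 (the true seasonal amplitude)
```

Fitting the real components jointly, with changepoints in the trend and regressors for holidays, is what Prophet automates for us.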
###Code
from sktime.forecasting.fbprophet import Prophet

# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)

forecaster = Prophet(
    seasonality_mode="multiplicative",
    n_changepoints=int(len(y_train) / 12),
    add_country_holidays={"country_name": "Canada"},
    yearly_seasonality=True,
    weekly_seasonality=False,
    daily_seasonality=False,
)
forecaster.fit(z_train)

y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index

plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
/Users/akeems/opt/anaconda3/envs/tmp_test/lib/python3.9/site-packages/sktime/forecasting/base/_fh.py:337: FutureWarning: Timestamp.freqstr is deprecated and will be removed in a future version.
  cutoff = _coerce_to_period(cutoff, freq=cutoff.freqstr)
###Markdown Working Example ###Code
from sktime.datasets import load_shampoo_sales

sh = load_shampoo_sales()
print(len(sh))
plot_series(sh)
###Output
36
###Markdown
Split
Exponential Smoothing
ARIMA ###Code
# d - value.
# ACF/PACF plot
#ARIMA
###Output
_____no_output_____
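For the exercise skeleton above, the ACF values that plot_acf draws are just lag-k autocorrelations, which can be computed by hand. A stdlib sketch of the same quantity (statsmodels adds confidence bands and small-sample corrections):

```python
# Lag-k autocorrelation computed from scratch -- the quantity an ACF
# plot displays at lag k. Hand-rolled stdlib sketch.

def acf(y, lag):
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y)
    cov = sum((y[t] - mean) * (y[t + lag] - mean) for t in range(n - lag))
    return cov / var

alternating = [1, -1, 1, -1, 1, -1, 1, -1]
print(acf(alternating, 1))  # -> -0.875 (strong negative lag-1 correlation)
```

A strongly negative lag-1 value like this is the "over-differenced" signature discussed earlier, which would push us toward an MA term.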
EHR_Claims/Lasso/.ipynb_checkpoints/EHR_C_Hemorrhage_FAMD-checkpoint.ipynb
###Markdown Template LR ###Code def lr(X_train, y_train): from sklearn.linear_model import Lasso from sklearn.decomposition import PCA from sklearn.linear_model import LogisticRegression from sklearn.model_selection import GridSearchCV from imblearn.over_sampling import SMOTE from sklearn.preprocessing import StandardScaler model = LogisticRegression(penalty = 'l1', solver = 'liblinear') param_grid = [ {'C' : np.logspace(-4, 4, 20)} ] clf = GridSearchCV(model, param_grid, cv = 5, verbose = True, n_jobs = -1) best_clf = clf.fit(X_train, y_train) return best_clf def train_scores(X_train,y_train): from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import fbeta_score from sklearn.metrics import roc_auc_score from sklearn.metrics import log_loss pred = best_clf.predict(X_train) actual = y_train print(accuracy_score(actual,pred)) print(f1_score(actual,pred)) print(fbeta_score(actual,pred, average = 'macro', beta = 2)) print(roc_auc_score(actual, best_clf.decision_function(X_train))) print(log_loss(actual,pred)) def test_scores(X_test,y_test): from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import fbeta_score from sklearn.metrics import roc_auc_score from sklearn.metrics import log_loss pred = best_clf.predict(X_test) actual = y_test print(accuracy_score(actual,pred)) print(f1_score(actual,pred)) print(fbeta_score(actual,pred, average = 'macro', beta = 2)) print(roc_auc_score(actual, best_clf.decision_function(X_test))) print(log_loss(actual,pred)) ###Output _____no_output_____ ###Markdown FAMD Transformation ###Code from prince import FAMD famd = FAMD(n_components = 15, n_iter = 3, random_state = 101) for (colName, colData) in co_train_gpop.iteritems(): if (colName != 'Co_N_Drugs_RC0' and colName!= 'Co_N_Hosp_RC0' and colName != 'Co_Total_HospLOS_RC0' and colName != 'Co_N_MDVisit_RC0'): co_train_gpop[colName].replace((1,0) ,('yes','no'), inplace = True) 
co_train_low[colName].replace((1,0) ,('yes','no'), inplace = True) co_train_high[colName].replace((1,0) ,('yes','no'), inplace = True) co_validation_gpop[colName].replace((1,0), ('yes','no'), inplace = True) co_validation_high[colName].replace((1,0), ('yes','no'), inplace = True) co_validation_low[colName].replace((1,0), ('yes','no'), inplace = True) famd.fit(co_train_gpop) co_train_gpop_FAMD = famd.transform(co_train_gpop) famd.fit(co_train_high) co_train_high_FAMD = famd.transform(co_train_high) famd.fit(co_train_low) co_train_low_FAMD = famd.transform(co_train_low) famd.fit(co_validation_gpop) co_validation_gpop_FAMD = famd.transform(co_validation_gpop) famd.fit(co_validation_high) co_validation_high_FAMD = famd.transform(co_validation_high) famd.fit(co_validation_low) co_validation_low_FAMD = famd.transform(co_validation_low) ###Output /PHShome/se197/anaconda3/lib/python3.8/site-packages/pandas/core/series.py:4509: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy return super().replace( ###Markdown General Population ###Code best_clf = lr(co_train_gpop_FAMD, out_train_hemorrhage_gpop) train_scores(co_train_gpop_FAMD, out_train_hemorrhage_gpop) print() test_scores(co_validation_gpop_FAMD, out_validation_hemorrhage_gpop) comb = [] for i in range(len(predictor_variable_claims)): comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:,i:i+1])) comb ###Output Fitting 5 folds for each of 20 candidates, totalling 100 fits 0.920238353868595 0.0 0.49148019227581635 0.6940932038564851 2.75486966062259 0.9069605105704028 0.0 0.48994785381830713 0.6740192101121485 3.213470121305512 ###Markdown High Continuity ###Code best_clf = lr(co_train_high_FAMD, out_train_hemorrhage_high) train_scores(co_train_high_FAMD, out_train_hemorrhage_high) print() 
test_scores(co_validation_high_FAMD, out_validation_hemorrhage_high) comb = [] for i in range(len(predictor_variable_claims)): comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:,i:i+1])) comb ###Output Fitting 5 folds for each of 20 candidates, totalling 100 fits 0.9300286327845383 0.0 0.49258795875037914 0.7219670036099148 2.416725406301017 0.9207558989350595 0.0 0.49153921612342266 0.7171678826751618 2.7369942872976836 ###Markdown Low Continuity ###Code best_clf = lr(co_train_low_FAMD, out_train_hemorrhage_low) train_scores(co_train_low_FAMD, out_train_hemorrhage_low) print() test_scores(co_validation_low_FAMD, out_validation_hemorrhage_low) comb = [] for i in range(len(predictor_variable_claims)): comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:,i:i+1])) comb ###Output Fitting 5 folds for each of 20 candidates, totalling 100 fits 0.9096202367859222 0.0 0.4902576118945004 0.6694943747575326 3.1216064322760064 0.8943500668066425 0.0 0.48845961386097325 0.6225383293921155 3.6490194187026272
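One thing worth noticing in the scores above: accuracy sits near 0.9 while F1 is 0.0, which happens when a classifier predicts the majority (no-hemorrhage) class for essentially everyone. A hand-computed sketch with a made-up 92/8 class split shows how that combination arises:

```python
# Why accuracy can be ~0.92 while F1 is 0.0: an always-negative classifier
# on imbalanced labels. The 92/8 split below is illustrative, not the
# actual cohort proportions.

actual = [1] * 8 + [0] * 92   # 8% positive (hemorrhage) outcomes
pred = [0] * 100              # classifier that always predicts "no"

accuracy = sum(a == p for a, p in zip(actual, pred)) / len(actual)
true_pos = sum(a == 1 and p == 1 for a, p in zip(actual, pred))
precision = true_pos / max(sum(pred), 1)
recall = true_pos / sum(actual)
f1 = 0.0 if true_pos == 0 else 2 * precision * recall / (precision + recall)

print(accuracy, f1)  # -> 0.92 0.0
```

This is why the macro F-beta and AUC columns are more informative than raw accuracy for this outcome.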
module1-decision-trees/LS_DS_221_assignment.ipynb
###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [ ] Get your validation accuracy score.- [ ] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. 
This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. 
For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape # Check Pandas Profiling version import pandas_profiling pandas_profiling.__version__ # Old code for Pandas Profiling version 2.3 # It can be very slow with medium & large datasets. # These parameters will make it faster. 
# profile = train.profile_report(
#     check_correlation_pearson=False,
#     correlations={
#         'pearson': False,
#         'spearman': False,
#         'kendall': False,
#         'phi_k': False,
#         'cramers': False,
#         'recoded': False,
#     },
#     plot={'histogram': {'bayesian_blocks_bins': False}},
# )

# # New code for Pandas Profiling version 2.4
# from pandas_profiling import ProfileReport
# profile = ProfileReport(train, minimal=True).to_notebook_iframe()
# profile
###Output
_____no_output_____
###Markdown Features
Your goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:
amount_tsh : Total static head (amount water available to waterpoint)
date_recorded : The date the row was entered
funder : Who funded the well
gps_height : Altitude of the well
installer : Organization that installed the well
longitude : GPS coordinate
latitude : GPS coordinate
wpt_name : Name of the waterpoint if there is one
num_private :
basin : Geographic water basin
subvillage : Geographic location
region : Geographic location
region_code : Geographic location (coded)
district_code : Geographic location (coded)
lga : Geographic location
ward : Geographic location
population : Population around the well
public_meeting : True/False
recorded_by : Group entering this row of data
scheme_management : Who operates the waterpoint
scheme_name : Who operates the waterpoint
permit : If the waterpoint is permitted
construction_year : Year the waterpoint was constructed
extraction_type : The kind of extraction the waterpoint uses
extraction_type_group : The kind of extraction the waterpoint uses
extraction_type_class : The kind of extraction the waterpoint uses
management : How the waterpoint is managed
management_group : How the waterpoint is managed
payment : What the water costs
payment_type : What the water costs
water_quality : The quality of the water
quality_group : The quality of the water
quantity : The quantity of water
quantity_group : The quantity of water
source : The source of the water
source_type : The source of the water
source_class : The source of the water
waterpoint_type : The kind of waterpoint
waterpoint_type_group : The kind of waterpoint ###Code
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

train.columns

cols_to_keep = ['id','funder', 'installer', 'basin', 'subvillage', 'region', 'population', 'permit',
                'extraction_type', 'extraction_type_group', 'extraction_type_class', 'management',
                'management_group', 'payment', 'quality_group','quantity_group', 'source', 'source_class',
                'waterpoint_type_group', 'status_group']
#gps_height (median impute? or drop?) 'longitude', 'latitude',
###Output
_____no_output_____
###Markdown I dropped 18 columns in total ###Code
test_train_cols_to_keep = cols_to_keep[1:]
test_id_col=test.id
train = train[test_train_cols_to_keep]
test_train_cols_to_keep.pop()
test = test[test_train_cols_to_keep] # maintaining the same dimensions for train and test sets

test

train.shape, test.shape

train.shape, test.shape #1812 rows removed

train.funder.value_counts()

train.funder = train.funder.fillna("Government Of Tanzania")
train.installer = train.installer.fillna("DWE")
train.subvillage = train.subvillage.fillna("Majengo")

train.funder.value_counts()

# def reduce_categories(df, list_of_series, list_of_thresholds):
#     a=[]
#     for i in range(len(list_of_series)):
#         series = df[list_of_series[i]]
#         series_frequencies = series.value_counts(normalize=True)
#         threshold = list_of_thresholds[i]
#         smaller_categories = series_frequencies[series_frequencies<threshold].index
#         reduced_series = df[series].replace(smaller_categories, "Other")
#         a.append(reduced_series)
#     return a
# reduce_categories(train, ['funder', 'installer', 'subvillage'], [.01,.01,.001])

funder_frequencies = train.funder.value_counts(normalize=True) # < .01
installer_frequencies
= train.installer.value_counts(normalize=True) # < .01 subvillage_frequencies = train.subvillage.value_counts(normalize=True) # < .001 funder_small_categories = funder_frequencies[funder_frequencies < 0.01].index #(returns list of relevant row names) installer_small_categories = installer_frequencies[installer_frequencies < 0.01].index #(returns list of relevant row names) subvillage_small_categories = subvillage_frequencies[subvillage_frequencies < 0.001].index #(returns list of relevant row names) train.funder = train.funder.replace(funder_small_categories, "Other") train.installer = train.installer.replace(installer_small_categories, "Other") train.subvillage = train.subvillage.replace(subvillage_small_categories, "Other") train.population.median() ((train.population==0).sum())/train.shape[0] median_population = train.population.median() train.population = train.population.replace(0, median_population) test.population = test.population.replace(0, median_population) train.permit = train.permit.fillna(False) test.permit = test.permit.fillna(False) train.source_class.value_counts(normalize=True) train.source_class = train.source_class.replace("unknown", "groundwater") test.source_class = test.source_class.replace("unknown", "groundwater") train.source_class.value_counts() ###Output _____no_output_____ ###Markdown Come back to this later ###Code # train.construction_year.value_counts(normalize=True) # numeric_train = train.select_dtypes(include="number") # numeric_test = test.select_dtypes(include="number") # non_numeric_train = train.select_dtypes(exclude="number") # non_numeric_test = test.select_dtypes(exclude="number") #(train.shape[1] == (non_numeric_train.shape[1]+numeric_train.shape[1])) #(test.shape[1] == (non_numeric_test.shape[1]+numeric_test.shape[1])) y = train.status_group X = train.drop(y.name, axis=1) from sklearn.model_selection import train_test_split X_train, X_validate, y_train, y_validate = train_test_split(X,y,test_size=0.2, random_state=99) 
###Output _____no_output_____ ###Markdown Test for "stratify = y" later ###Code #!pip install git+https://github.com/MaxHalford/Prince #from prince import MCA #from sklearn.linear_model import LogisticRegressionCV train.shape #!pip install catboost from category_encoders import OneHotEncoder, CatBoostEncoder, OrdinalEncoder from sklearn.preprocessing import MinMaxScaler #from catboost import CatBoostClassifier # X_latitude_positive = X_train.latitude.apply(abs) # X_longitude_positive = X_train.longitude.apply(abs) # X_train_positive = X_train.copy() # X_train_positive.latitude = X_latitude_positive # X_train_positive.longitude = X_longitude_positive # mca = MCA() # mca.fit(X_train_positive.select_dtypes(exclude="number")) # from catboost import Pool # train_data = X_train # eval_data = y_train # cat_features = X_train.select_dtypes(include="object").columns.to_list() # train_dataset = Pool(data=X_train, # label=y_train, # cat_features=cat_features) # eval_dataset = Pool(data=X_validate, # label=y_validate, # cat_features=cat_features) # # Initialize CatBoostClassifier # model = CatBoostClassifier(iterations=10, # learning_rate=.5, # depth=16, # loss_function='MultiClass') # # Fit model # model.fit(train_dataset) # # Get predicted classes # preds_class = model.predict(eval_dataset) # # Get predicted probabilities for each class # preds_proba = model.predict_proba(eval_dataset) # # Get predicted RawFormulaVal # preds_raw = model.predict(eval_dataset, # prediction_type='RawFormulaVal') #model.score(X_validate,y_validate) #pd.DataFrame(preds_class)[0].unique() #pd.DataFrame(preds_proba, columns=['functional', 'non functional', 'functional needs repair']) pipeline = make_pipeline(OrdinalEncoder(), RandomForestClassifier(max_depth=50)) pipeline.fit(X_train, y_train) print('Validation Accuracy', pipeline.score(X_validate, y_validate)) pipeline.predict(test) Combined_X = X_train.append(X_validate) Combined_y = y_train.append(y_validate) # from sklearn.model_selection 
import RandomizedSearchCV # # Number of trees in random forest # n_estimators = [int(x) for x in np.linspace(start = 100, stop = 800, num = 10)] # # Number of features to consider at every split # max_features = ['auto', 'sqrt'] # # Maximum number of levels in tree # max_depth = [int(x) for x in np.linspace(10, 110, num = 11)] # max_depth.append(None) # # Minimum number of samples required to split a node # min_samples_split = [3, 5, 10] # # Minimum number of samples required at each leaf node # min_samples_leaf = [1, 2, 4] # # Method of selecting samples for training each tree # bootstrap = [True, False] # # Create the random grid # random_grid = {'n_estimators': n_estimators, # 'max_features': max_features, # 'max_depth': max_depth, # 'min_samples_split': min_samples_split, # 'min_samples_leaf': min_samples_leaf, # 'bootstrap': bootstrap} # rf = RandomForestClassifier() # # Random search of parameters, using 3 fold cross validation, # # search across 100 different combinations, and use all available cores # rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 6, cv = 4, verbose=2, random_state=42, n_jobs = -1) # # Fit the random search model # pipeline2 = make_pipeline(OrdinalEncoder(), # rf_random) # pipeline2.fit(Combined_X, Combined_y) # y_pred = pipeline2.predict(test) # pd.DataFrame(data={"id":test_id_col,"status_group":y_pred}).to_csv("water_pred.csv", index=False) # pd.read_csv('water_pred.csv') #from google.colab import output #output.eval_js('new Audio("https://upload.wikimedia.org/wikipedia/commons/0/05/Beep-09.ogg").play()') #output.eval_js('new Audio("https://upload.wikimedia.org/wikipedia/commons/0/05/Beep-09.ogg").play()') # import lightgbm import xgboost as xgb # ohe = OneHotEncoder(use_cat_names=True) # encoded_X = ohe.fit_transform(X) # encoded_y = y.replace({'functional': 3, 'non functional': 2, 'functional needs repair': 1}) # XG_train, XG_validate, yG_train, yG_validate = 
train_test_split(encoded_X,encoded_y, test_size=.2, random_state=99)

# encoded_y.unique()

# model1 = xgb.XGBClassifier()
# model2 = xgb.XGBClassifier(n_estimators=200, max_depth=12, learning_rate=0.3, subsample=0.5)

# train_model1 = model1.fit(XG_train, yG_train)
# train_model2 = model2.fit(XG_train, yG_train)

#{3:'functional': 2:'non functional', 1: 'functional needs repair'}
# pred1 = train_model1.predict(XG_validate)
# pred2 = train_model2.predict(XG_validate)

from sklearn.metrics import accuracy_score

# accuracy_score(yG_validate, pred1)

# accuracy_score(yG_validate, pred2)

ohe2 = OneHotEncoder(use_cat_names=True, handle_unknown="ignore")
XG_encoder = ohe2.fit(Combined_X)
encoded_train_X = XG_encoder.transform(Combined_X)
encoded_Combo_y = Combined_y.replace({'functional': 3, 'non functional': 2, 'functional needs repair': 1})
encoded_test = XG_encoder.transform(test)

# model_with_params = xgb.XGBClassifier(n_estimators=200, max_depth=12, learning_rate=0.3, subsample=0.6)
# trained_with_params = model_with_params.fit(encoded_train_X, encoded_Combo_y)

# XGboost_pred = trained_with_params.predict(encoded_test)
# XGboost_pred = pd.Series(XGboost_pred).replace({3: 'functional', 2:'non functional', 1:'functional needs repair'})

# pd.DataFrame(data={"id":test_id_col,"status_group":XGboost_pred}).to_csv("water_pred_xgb.csv", index=False)

# pd.read_csv('water_pred_xgb.csv')

from sklearn.model_selection import GridSearchCV

clf = xgb.XGBClassifier()
parameters = {
    "eta"              : [0.05, 0.10, 0.15, 0.20, 0.25, 0.30],
    "max_depth"        : [3, 4, 5, 6, 8, 10, 12, 15],
    "min_child_weight" : [1, 3, 5, 7],
    "gamma"            : [0.0, 0.1, 0.2, 0.3, 0.4],
    "colsample_bytree" : [0.3, 0.4, 0.5, 0.7]
}

grid = GridSearchCV(clf, parameters, n_jobs=2, scoring="neg_log_loss", cv=3)
grid.fit(encoded_train_X, encoded_Combo_y)

# predict with the fitted grid search (trained_with_params is commented out above)
XGboost_predGV = grid.predict(encoded_test)
XGboost_predGV = pd.Series(XGboost_predGV).replace({3: 'functional', 2:'non functional', 1:'functional needs repair'})
pd.DataFrame(data={"id":test_id_col,"status_group":XGboost_predGV}).to_csv("water_pred_xgbGV.csv", index=False)

pd.read_csv('water_pred_xgbGV.csv')
###Output
_____no_output_____
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape from pandas_profiling import ProfileReport #profile = ProfileReport(train, minimal=True).to_notebook_iframe() #profile ###Output /usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. 
import pandas.util.testing as tm ###Markdown Imports ###Code import numpy as np import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegression from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.tree import DecisionTreeClassifier def wrangle(x): df = x.copy() #dropping df.drop(['recorded_by','date_recorded','quantity_group'],axis=1,inplace=True)#dont drop id but i think dont include in model train #replace latitude close to 0 values with 0 df.latitude.replace({-2.000000e-08:0},inplace=True) # replace -s with anans here cols_with_zeros = ['longitude', 'latitude'] for col in cols_with_zeros: df[col] = df[col].replace(0, np.nan) return df train = wrangle(train) test = wrangle(test) ###Output _____no_output_____ ###Markdown this removes the nans and puts other..., what its supposed to do is just lower each categorical columns cardinality ###Code for i in ['funder','installer','wpt_name','subvillage','lga','ward','scheme_name']: top10 = train[i].value_counts()[:10].index train.loc[~train[i].isin(top10), i] = 'OTHER' test.loc[~test[i].isin(top10), i] = 'OTHER' train.isnull().sum() train.shape test.shape test.isnull().sum() print(f"{test.dropna().shape}, {train.dropna().shape}")# i think its ok to drop here #test.dropna(inplace=True)# i cant drop nans in test because i need an answer for each column train.dropna(inplace=True)# i can here but i need to figure out what i wan tto do with the ananas in test test.head(2) X_train,X_val,y_train,y_val = train_test_split(train.dropna().drop('status_group',axis=1),train.dropna().status_group,test_size=.2) pipe_model = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(), StandardScaler(), LogisticRegression(n_jobs=-1) ) # Fit pipe_model.fit(X_train, y_train); print(f"Val Score: {pipe_model.score(X_val,y_val)}") print(f"Train Score: {pipe_model.score(X_train,y_train)}") print(f"Baseline: 
{max(train.status_group.value_counts(normalize=True))}") pipe_model.predict(test) test.shape test.info() tree_pipe = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(), StandardScaler(), DecisionTreeClassifier() ) tree_pipe.fit(X_train,y_train); print(f"Val Score: {tree_pipe.score(X_val,y_val)}") print(f"Train Score: {tree_pipe.score(X_train,y_train)}") print(f"Baseline: {max(train.status_group.value_counts(normalize=True))}") y_pred = tree_pipe.predict(test) DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('second_try.csv', index=False) from google.colab import files files.download('second_try.csv') test.isnull().sum() import matplotlib.pyplot as plt model = pipe_model.named_steps['logisticregression'] encoder = pipe_model.named_steps['onehotencoder'] encoded_columns = encoder.transform(X_val).columns coefficients = pd.Series(model.coef_[0], encoded_columns) plt.figure(figsize=(10,30)) coefficients.sort_values().plot.barh(color='grey'); y_pred.shape test.shape[0] from sklearn.ensemble import RandomForestClassifier forest_pipe = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(), StandardScaler(), RandomForestClassifier() ) forest_pipe.fit(X_train,y_train); print(f"Val Score: {forest_pipe.score(X_val,y_val)}") print(f"Train Score: {forest_pipe.score(X_train,y_train)}") print(f"Baseline: {max(train.status_group.value_counts(normalize=True))}") sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('third_try.csv', index=False) files.download('third_try.csv') ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [x] [Sign up for 
a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [x] Do train/validate/test split with the Tanzania Waterpumps data.- [x] Begin with baselines for classification.- [x] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [x] Get your validation accuracy score.- [x] Get and plot your feature importances.- [x] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [x] Commit your notebook to your fork of the GitHub repo. Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. 
This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. 
For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape # Pandas Profiling can be very slow with medium & large datasets. # These parameters will make it faster. 
# https://github.com/pandas-profiling/pandas-profiling/issues/222 # import pandas_profiling # profile_report = train.profile_report( # #check_correlation_pearson=False, # correlations={ # 'pearson': False, # 'spearman': False, # 'kendall': False, # 'phi_k': False, # 'cramers': False, # 'recoded': False, # }, # plot={'histogram': {'bayesian_blocks_bins': False}}, # ) #profile_report train.describe(include='object') #Divide train data into train/validation from sklearn.model_selection import train_test_split train, val = train_test_split(train, train_size=0.8) train.shape, val.shape import numpy as np def wrangle(X): """Define function to wrangle test, train, validate data in the same way """ #Prevent from SettingWithCopyWarning X=X.copy() #Replace latitude near 0 outside of Tanzania -- treat like zeros X['latitude']=X['latitude'].replace(-2e-08, 0) #When columns have zeros -- that's missing values, replace with np.nan cols_with_zeros=['longitude', 'latitude', 'construction_year'] for col in cols_with_zeros: X[col]=X[col].replace(0,np.nan) #quanity and quantity_group are dupes, drop one: X=X.drop(columns='quantity_group') #return wrangled df return X train=wrangle(train) val=wrangle(val) test=wrangle(test) #select features #Get features with cardinality of non-numeric features non_numeric=train.select_dtypes(include='object').nunique() to_exclude=non_numeric[non_numeric>50].index.to_list to_exclude target = 'status_group' features = train.columns.drop(['id',target,'date_recorded', 'funder', 'installer', 'wpt_name', 'subvillage', 'lga', 'ward', 'scheme_name']) X_train = train[features] X_val = val[features] y_train= train[target] y_val = val[target] X_test= test[features] X_train.shape, X_val.shape #Calculate Test baseline: from sklearn.metrics import accuracy_score majority_class=y_train.mode()[0] y_pred=[majority_class] * len(y_val) accuracy_val = accuracy_score(y_pred, y_val) print(f'Majority class baseline is {majority_class}, and its accuracy is 
{accuracy_val:.4f}') ###Output Majority class baseline is functional, and its accuracy is 0.5510 ###Markdown Now let's create a model to beat this accuracySelect features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier. ###Code import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegression from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler pipeline = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='mean'), StandardScaler(), LogisticRegression(multi_class='auto', solver='lbfgs',n_jobs=1) ) #Fit on train pipeline.fit(X_train, y_train) #Score on val print('Validation accuracy', pipeline.score(X_val,y_val)) #Predict on test y_pred=pipeline.predict(X_test) #Replace with the decision tree: from sklearn.tree import DecisionTreeClassifier pipeline=make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='mean'), DecisionTreeClassifier(random_state=42) ) #Fit on train: pipeline.fit(X_train, y_train) ##Accuracy score on val print('Train Accuracy', pipeline.score(X_train, y_train)) print('Validation Accuracy', pipeline.score(X_val, y_val)) #Predict on test y_pred=pipeline.predict(X_test) #Reduce complexity pipeline=make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='mean'), DecisionTreeClassifier(random_state=42, min_samples_leaf=20) ) #Fit on train: pipeline.fit(X_train, y_train) ##Accuracy score on val print('Train Accuracy', pipeline.score(X_train, y_train)) print('Validation Accuracy', pipeline.score(X_val, y_val)) #Or, decrease the depth #Reduce complexity pipeline=make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='mean'), DecisionTreeClassifier(random_state=42, max_depth=17) ) #Fit on train: pipeline.fit(X_train, y_train) ##Accuracy score on val print('Train Accuracy', pipeline.score(X_train, y_train)) 
print('Validation Accuracy', pipeline.score(X_val, y_val)) y_pred=pipeline.predict(X_test) result=pd.Series(pipeline.predict(X_test)) result=pd.concat([test['id'], result], axis=1) result.columns=['id','status_group'] result result.to_csv(path_or_buf='kaggle.csv',index=False) type(result) result #Get and plot your feature importances. import matplotlib.pyplot as plt model=pipeline.named_steps['decisiontreeclassifier'] encoder=pipeline.named_steps['onehotencoder'] encoded_columns=encoder.transform(X_val).columns importances=pd.Series(model.feature_importances_, encoded_columns) plt.figure(figsize=(10,30)) importances.sort_values().plot.barh(color='gray') ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [ ] Get your validation accuracy score.- [ ] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. 
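One checklist item above, getting and plotting feature importances, comes down to pairing `feature_importances_` from the fitted tree with the encoder's output column names. A self-contained toy version (the columns and target here are synthetic, not the competition data):

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Toy "encoded" matrix; in the assignment these come out of the pipeline's encoder.
rng = np.random.default_rng(1)
encoded_columns = ['quantity_dry', 'quantity_enough', 'gps_height']
X = pd.DataFrame(rng.random((300, 3)), columns=encoded_columns)
# Target driven almost entirely by the first column, so its importance dominates
y = (X['quantity_dry'] + 0.1 * rng.random(300) > 0.5).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# One importance per encoded column; scikit-learn normalizes them to sum to 1
importances = pd.Series(tree.feature_importances_,
                        index=encoded_columns).sort_values()
print(importances)
# importances.plot.barh()  # same horizontal-bar plot as in the notebook cells
```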
Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. 
For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape # Check Pandas Profiling version import pandas_profiling pandas_profiling.__version__ # Old code for Pandas Profiling version 2.3 # It can be very slow with medium & large datasets. # These parameters will make it faster. # profile = train.profile_report( # check_correlation_pearson=False, # correlations={ # 'pearson': False, # 'spearman': False, # 'kendall': False, # 'phi_k': False, # 'cramers': False, # 'recoded': False, # }, # plot={'histogram': {'bayesian_blocks_bins': False}}, # ) # # New code for Pandas Profiling version 2.4 from pandas_profiling import ProfileReport profile = ProfileReport(train, minimal=True).to_notebook_iframe() profile ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [x] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. 
Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [x] Do train/validate/test split with the Tanzania Waterpumps data.- [x] Begin with baselines for classification.- [x] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [x] Get your validation accuracy score.- [x] Get and plot your feature importances.- [x] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [x] Commit your notebook to your fork of the GitHub repo. Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. 
(For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` Setup Code ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' ###Output Requirement already satisfied: category_encoders==2.* in /usr/local/lib/python3.6/dist-packages (2.1.0) Requirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.5.1) Requirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (1.4.1) Requirement already satisfied: statsmodels>=0.6.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.10.2) Requirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (1.17.5) Requirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.22.1) Requirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.25.3) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.4.1->category_encoders==2.*) (1.12.0) Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders==2.*) (0.14.1) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.*) (2018.9) Requirement already satisfied: 
Requirement already satisfied: seaborn in /usr/local/lib/python3.6/dist-packages (from missingno==0.4.2->pandas-profiling==2.*) (0.10.0) Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2==2.11.1->pandas-profiling==2.*) (1.1.1) Requirement already satisfied: networkx in /usr/local/lib/python3.6/dist-packages (from visions==0.2.2->pandas-profiling==2.*) (2.4) Requirement already satisfied: attr in /usr/local/lib/python3.6/dist-packages (from visions==0.2.2->pandas-profiling==2.*) (0.3.1) Requirement already satisfied: python-slugify in /usr/local/lib/python3.6/dist-packages (from kaggle==1.5.6->pandas-profiling==2.*) (4.0.0) Requirement already satisfied: six>=1.10 in /usr/local/lib/python3.6/dist-packages (from kaggle==1.5.6->pandas-profiling==2.*) (1.12.0) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0.3->pandas-profiling==2.*) (2.4.6) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0.3->pandas-profiling==2.*) (0.10.0) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0.3->pandas-profiling==2.*) (1.1.0) Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from confuse==1.0.0->pandas-profiling==2.*) (3.13) Requirement already satisfied: nbformat>=4.2.0 in /usr/local/lib/python3.6/dist-packages (from ipywidgets==7.5.1->pandas-profiling==2.*) (5.0.4) Requirement already satisfied: traitlets>=4.3.1 in /usr/local/lib/python3.6/dist-packages (from ipywidgets==7.5.1->pandas-profiling==2.*) (4.3.3) Requirement already satisfied: ipython>=4.0.0; python_version >= "3.3" in /usr/local/lib/python3.6/dist-packages (from ipywidgets==7.5.1->pandas-profiling==2.*) (5.5.0) Requirement already satisfied: widgetsnbextension~=3.5.0 in /usr/local/lib/python3.6/dist-packages (from 
ipywidgets==7.5.1->pandas-profiling==2.*) (3.5.1) Requirement already satisfied: ipykernel>=4.5.1 in /usr/local/lib/python3.6/dist-packages (from ipywidgets==7.5.1->pandas-profiling==2.*) (4.6.1) Requirement already satisfied: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik==0.9.9->pandas-profiling==2.*) (0.4.4) Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik==0.9.9->pandas-profiling==2.*) (1.4.2) Requirement already satisfied: jupyter-core in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik==0.9.9->pandas-profiling==2.*) (4.6.2) Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik==0.9.9->pandas-profiling==2.*) (2.1.3) Requirement already satisfied: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik==0.9.9->pandas-profiling==2.*) (3.1.0) Requirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik==0.9.9->pandas-profiling==2.*) (0.6.0) Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik==0.9.9->pandas-profiling==2.*) (0.8.4) Requirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik==0.9.9->pandas-profiling==2.*) (0.3) Requirement already satisfied: tornado>=4.1 in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik==0.9.9->pandas-profiling==2.*) (4.5.3) Requirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik==0.9.9->pandas-profiling==2.*) (17.0.0) Requirement already satisfied: pylint>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest-pylint>=0.13.0->phik==0.9.9->pandas-profiling==2.*) (2.4.4) Requirement already satisfied: llvmlite>=0.31.0dev0 in /usr/local/lib/python3.6/dist-packages 
(from numba>=0.38.1->phik==0.9.9->pandas-profiling==2.*) (0.31.0) Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from numba>=0.38.1->phik==0.9.9->pandas-profiling==2.*) (45.1.0) Requirement already satisfied: importlib-metadata>=0.12; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik==0.9.9->pandas-profiling==2.*) (1.5.0) Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik==0.9.9->pandas-profiling==2.*) (1.8.1) Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik==0.9.9->pandas-profiling==2.*) (0.1.8) Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik==0.9.9->pandas-profiling==2.*) (20.1) Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik==0.9.9->pandas-profiling==2.*) (19.3.0) Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik==0.9.9->pandas-profiling==2.*) (8.2.0) Requirement already satisfied: pluggy<1.0,>=0.12 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik==0.9.9->pandas-profiling==2.*) (0.13.1) Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx->visions==0.2.2->pandas-profiling==2.*) (4.4.1) Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.6/dist-packages (from python-slugify->kaggle==1.5.6->pandas-profiling==2.*) (1.3) Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.2.0->ipywidgets==7.5.1->pandas-profiling==2.*) (0.2.0) Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.2.0->ipywidgets==7.5.1->pandas-profiling==2.*) (2.6.0) Requirement already satisfied: 
pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets==7.5.1->pandas-profiling==2.*) (0.7.5) Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets==7.5.1->pandas-profiling==2.*) (1.0.18) Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets==7.5.1->pandas-profiling==2.*) (0.8.1) Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets==7.5.1->pandas-profiling==2.*) (4.8.0) Requirement already satisfied: notebook>=4.4.1 in /usr/local/lib/python3.6/dist-packages (from widgetsnbextension~=3.5.0->ipywidgets==7.5.1->pandas-profiling==2.*) (5.2.2) Requirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert>=5.3.1->phik==0.9.9->pandas-profiling==2.*) (0.5.1) Requirement already satisfied: isort<5,>=4.2.5 in /usr/local/lib/python3.6/dist-packages (from pylint>=2.0.0->pytest-pylint>=0.13.0->phik==0.9.9->pandas-profiling==2.*) (4.3.21) Requirement already satisfied: mccabe<0.7,>=0.6 in /usr/local/lib/python3.6/dist-packages (from pylint>=2.0.0->pytest-pylint>=0.13.0->phik==0.9.9->pandas-profiling==2.*) (0.6.1) Requirement already satisfied: astroid<2.4,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from pylint>=2.0.0->pytest-pylint>=0.13.0->phik==0.9.9->pandas-profiling==2.*) (2.3.3) Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata>=0.12; python_version < "3.8"->pytest>=4.0.2->phik==0.9.9->pandas-profiling==2.*) (2.2.0) Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.6/dist-packages (from pexpect; sys_platform != "win32"->ipython>=4.0.0; python_version >= 
"3.3"->ipywidgets==7.5.1->pandas-profiling==2.*) (0.6.0) Requirement already satisfied: terminado>=0.3.3; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets==7.5.1->pandas-profiling==2.*) (0.8.3) Requirement already satisfied: wrapt==1.11.* in /usr/local/lib/python3.6/dist-packages (from astroid<2.4,>=2.3.0->pylint>=2.0.0->pytest-pylint>=0.13.0->phik==0.9.9->pandas-profiling==2.*) (1.11.2) Requirement already satisfied: lazy-object-proxy==1.4.* in /usr/local/lib/python3.6/dist-packages (from astroid<2.4,>=2.3.0->pylint>=2.0.0->pytest-pylint>=0.13.0->phik==0.9.9->pandas-profiling==2.*) (1.4.3) Requirement already satisfied: typed-ast<1.5,>=1.4.0; implementation_name == "cpython" and python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from astroid<2.4,>=2.3.0->pylint>=2.0.0->pytest-pylint>=0.13.0->phik==0.9.9->pandas-profiling==2.*) (1.4.1) ###Markdown Train / Test Split ###Code import pandas as pd train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape train.head() ###Output _____no_output_____ ###Markdown Feature Selection ###Code from sklearn.tree import DecisionTreeClassifier import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline target = 'status_group' # Determining: Is it functional? 
# Get a dataframe with all train columns except the target & id
train_features = train.drop(columns=[target, 'id'])

# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()

# Get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()

# Get a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()

# Combine the lists
features = numeric_features + categorical_features

X_train = train[features]
y_train = train[target]
###Output
_____no_output_____
###Markdown
Make Pipeline and Fit Decision Tree
###Code
pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    SimpleImputer(strategy='mean'),
    DecisionTreeClassifier(random_state=42)
)

pipeline.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
Baseline Accuracy Score
###Code
train_scored = pipeline.score(X_train, y_train)
print(f'Train Accuracy Score: {train_scored}')
###Output
Train Accuracy Score: 0.9952020202020202
###Markdown
Get and Plot Feature Importances
###Code
import numpy as np
import matplotlib.pyplot as plt

model = pipeline.named_steps.decisiontreeclassifier
encoder = pipeline.named_steps.onehotencoder
encoded_columns = encoder.transform(X_train).columns

importances = pd.Series(model.feature_importances_, encoded_columns)

plt.figure(figsize=(10, 30))
plt.title('Feature Importances')
importances.sort_values().plot.barh();
###Output
_____no_output_____
###Markdown
Profiling
###Code
# Check Pandas Profiling version
import pandas_profiling
pandas_profiling.__version__

# Old code for Pandas Profiling version 2.3
# It can be very slow with medium & large datasets.
# These parameters will make it faster.
# profile = train.profile_report(
#     check_correlation_pearson=False,
#     correlations={
#         'pearson': False,
#         'spearman': False,
#         'kendall': False,
#         'phi_k': False,
#         'cramers': False,
#         'recoded': False,
#     },
#     plot={'histogram': {'bayesian_blocks_bins': False}},
# )

# New code for Pandas Profiling version 2.4
from pandas_profiling import ProfileReport
# to_notebook_iframe() renders the report inline and returns None,
# so build the report first, then render it
profile = ProfileReport(train, minimal=True)
profile.to_notebook_iframe()
###Output
_____no_output_____
###Markdown
Lambda School Data Science

*Unit 2, Sprint 2, Module 1*

---

Decision Trees

Assignment

- [x] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.
- [x] Do train/validate/test split with the Tanzania Waterpumps data.
- [x] Begin with baselines for classification.
- [x] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.
- [x] Get your validation accuracy score.
- [x] Get and plot your feature importances.
- [x] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [x] Commit your notebook to your fork of the GitHub repo.
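For the "begin with baselines" item, the usual starting point for classification is majority-class accuracy: always predict the most frequent class and see what accuracy that already buys you. A minimal sketch on a toy frame (the `status_group` column name matches this dataset; the values below are made up):

```python
import pandas as pd

# Toy stand-in for the waterpumps labels (made-up values)
toy = pd.DataFrame({'status_group': ['functional'] * 6 +
                                    ['non functional'] * 3 +
                                    ['functional needs repair'] * 1})

# The normalized class distribution doubles as the baseline:
# predicting the most frequent class scores its relative frequency
distribution = toy['status_group'].value_counts(normalize=True)
majority_class = distribution.idxmax()
baseline_accuracy = distribution.max()

print(majority_class, baseline_accuracy)  # functional 0.6
```

Any fitted model should beat this number on validation data before it is worth tuning.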
Stretch Goals

Reading

- A Visual Introduction to Machine Learning
  - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/)
  - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)
- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2)
- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)
- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)
- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._
- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/)

Doing

- [ ] Add your own stretch goal(s) !
- [x] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)
- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).
- [ ] Make exploratory visualizations and share on Slack.

Exploratory visualizations

Visualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:

```python
train['functional'] = (train['status_group']=='functional').astype(int)
```

You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)

- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")
- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)

You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.

You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty.

High-cardinality categoricals

This code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'

```python
# Reduce cardinality for NEIGHBORHOOD feature ...

# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index

# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```

###Code
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
    !pip install category_encoders==2.*
    !pip install pandas-profiling==2.*

# If you're working locally:
else:
    DATA_PATH = '../data/'

import pandas as pd
from sklearn.model_selection import train_test_split

train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
                 pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')

train.shape, test.shape

# Check Pandas Profiling version
import pandas_profiling
pandas_profiling.__version__

# Old code for Pandas Profiling version 2.3
# It can be very slow with medium & large datasets.
# These parameters will make it faster.
# profile = train.profile_report(
#     check_correlation_pearson=False,
#     correlations={
#         'pearson': False,
#         'spearman': False,
#         'kendall': False,
#         'phi_k': False,
#         'cramers': False,
#         'recoded': False,
#     },
#     plot={'histogram': {'bayesian_blocks_bins': False}},
# )

# New code for Pandas Profiling version 2.4
# from pandas_profiling import ProfileReport
# profile = ProfileReport(train, minimal=True).to_notebook_iframe()
# profile
###Output
_____no_output_____
###Markdown
Do train/validate/test split with the Tanzania Waterpumps data.
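The three status classes are imbalanced, so the split below passes `stratify=` to keep each class's share roughly equal in train and val. A minimal sketch of what stratifying guarantees, on a made-up two-class frame:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Made-up frame: 80 'functional' rows and 20 'non functional' rows
toy = pd.DataFrame({'feature': range(100),
                    'status_group': ['functional'] * 80 + ['non functional'] * 20})

# Stratifying preserves the 80/20 class ratio in both pieces
tr, va = train_test_split(toy, train_size=0.80, test_size=0.20,
                          stratify=toy['status_group'], random_state=42)

print((tr['status_group'] == 'functional').mean())  # ~0.8 in the train piece
print((va['status_group'] == 'functional').mean())  # ~0.8 in the val piece
```

Without `stratify=`, a random split can over- or under-sample the rare class, which skews both the baseline and the validation score.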
###Code
# Split train into train and val
from sklearn.model_selection import train_test_split

train, val = train_test_split(train, train_size=0.80, test_size=0.20,
                              stratify=train['status_group'],
                              random_state=42)
###Output
_____no_output_____
###Markdown
Begin with baseline for classification
###Code
train['status_group'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.
###Code
# Explore cardinality of the high-cardinality categoricals
top15 = train['funder'].value_counts()[:15]
top15

top10 = train['installer'].value_counts()[:10]
top10

top30 = train['lga'].value_counts()[:30]
top30

top17 = train['construction_year'].value_counts()[:17]
top17

train['construction_year'].value_counts()

import numpy as np

def wrangle(X):
    """Wrangle train, validate, and test sets in the same way"""
    X = X.copy()

    # Fix the latitude & longitudes
    X['latitude'] = X['latitude'].replace(-2e-08, 0)
    cols_with_zeros = ['longitude', 'latitude']
    for col in cols_with_zeros:
        X[col] = X[col].replace(0, np.nan)

    # Drop duplicate quantity column
    # Drop num_private because of too many zeros
    X = X.drop(columns=['quantity_group', 'num_private'])

    # Change date_recorded to datetime
    X['date_recorded'] = pd.to_datetime(X['date_recorded'])

    # Reduce cardinality:
    # by replacing the non-top funders, installers w/ "other"
    top15 = X['funder'].value_counts()[:15].index
    X.loc[~X['funder'].isin(top15), 'funder'] = 'other'

    top10 = X['installer'].value_counts()[:10].index
    X.loc[~X['installer'].isin(top10), 'installer'] = 'other'

    top30 = X['lga'].value_counts()[:30].index
    X.loc[~X['lga'].isin(top30), 'lga'] = 'other'

    # Return the wrangled dataframe
    return X

# Wrangle train, val, and test
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)

# Select features
target = 'status_group'

# Drop target & id from train columns
train_features = train.drop(columns=[target, 'id'])

# Get a list of numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()

# Get a series w/ the cardinality of nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()

# Get a list of features w/ cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()

# Combine the lists
features = numeric_features + categorical_features
print(features)

# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]

# Use a scikit-learn pipeline to encode categoricals, impute missing values,
# and fit a decision tree classifier
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    SimpleImputer(strategy='mean'),
    DecisionTreeClassifier(random_state=42)
)

# Fit on train
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Get your validation accuracy score.
###Code
# Score on val
print('Validation Accuracy', pipeline.score(X_val, y_val))
# Score on train
print('Train Accuracy', pipeline.score(X_train, y_train))
# DEFINITELY overfit
###Output
Validation Accuracy 0.7598484848484849
Train Accuracy 0.9968855218855219
###Markdown
Reduce the complexity of the decision tree
###Code
pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    SimpleImputer(strategy='mean'),
    DecisionTreeClassifier(min_samples_leaf=20, random_state=42)
)

pipeline.fit(X_train, y_train)
print('Train Accuracy', pipeline.score(X_train, y_train))
print('Validation Accuracy', pipeline.score(X_val, y_val))
###Output
Train Accuracy 0.808270202020202
Validation Accuracy 0.7760942760942761
###Markdown
Get and plot your feature importances.
###Code
import matplotlib.pyplot as plt
%matplotlib inline

model = pipeline.named_steps['decisiontreeclassifier']
encoder = pipeline.named_steps['onehotencoder']
encoded_columns = encoder.transform(X_val).columns

importances = pd.Series(model.feature_importances_, encoded_columns)

plt.figure(figsize=(10, 30))
importances.sort_values().plot.barh();
###Output
_____no_output_____
###Markdown
Make a function that selects features, pipes, encodes, imputes, fits a decision tree, and returns pipeline and train/val scores
###Code
# Define a function that selects features, pipes, encodes, imputes, and
# fits a decision tree
# Returns the pipeline and features
def full_pipe(train, val, test):
    # Features:
    # Select features
    target = 'status_group'

    # Drop target & id from train columns
    train_features = train.drop(columns=[target, 'id'])

    # Get a list of numeric features
    numeric_features = train_features.select_dtypes(include='number').columns.tolist()

    # Get a series w/ the cardinality of nonnumeric features
    cardinality = train_features.select_dtypes(exclude='number').nunique()

    # Get a list of features w/ cardinality <= 50
    categorical_features = cardinality[cardinality <= 50].index.tolist()

    # Combine the lists
    features = numeric_features + categorical_features

    # Arrange data into X features matrix and y target vector
    X_train = train[features]
    y_train = train[target]
    X_val = val[features]
    y_val = val[target]
    X_test = test[features]

    # Pipeline
    pipeline = make_pipeline(
        ce.OneHotEncoder(use_cat_names=True),
        SimpleImputer(strategy='mean'),
        DecisionTreeClassifier(min_samples_leaf=20, random_state=42)
    )

    pipeline.fit(X_train, y_train)
    print('Train Accuracy', pipeline.score(X_train, y_train))
    print('Validation Accuracy', pipeline.score(X_val, y_val))

    return pipeline, features

pipeline, features = full_pipe(train, val, test)
###Output
Train Accuracy 0.808270202020202
Validation Accuracy 0.7760942760942761
###Markdown
Define function to plot feature importance
###Code
def plot_features(pipeline, val, features):
    %matplotlib inline
    X_val = val[features]

    model = pipeline.named_steps['decisiontreeclassifier']
    encoder = pipeline.named_steps['onehotencoder']
    encoded_columns = encoder.transform(X_val).columns

    importances = pd.Series(model.feature_importances_, encoded_columns)

    plt.figure(figsize=(10, 30))
    importances.sort_values().plot.barh();

plot_features(pipeline, val, features)
###Output
_____no_output_____
###Markdown
Continue to improve model through feature selection/engineering by building on the wrangle function
###Code
# profile = ProfileReport(train, minimal=True).to_notebook_iframe()
# profile

train['construction_year'].dtypes
train['construction_year'].describe()

y_pred = pipeline.predict(X_test)

# Makes a dataframe with two columns, id and status_group,
# and writes to a csv file, without the index
sample_submission = pd.read_csv('https://github.com/cjakuc/DS-Unit-2-Kaggle-Challenge/raw/master/module1-decision-trees/sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('cjakuc_tanzania_submission.csv', index=False)
###Output
_____no_output_____
###Markdown
Lambda School Data Science

*Unit 2, Sprint 2, Module 1*

---

Decision Trees

Assignment

- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.
- [ ] Do train/validate/test split with the Tanzania Waterpumps data.
- [ ] Begin with baselines for classification.
- [ ] Select the features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.
- [ ] Get your validation accuracy score.
- [ ] Get and plot your feature importances.
- [ ] Submit your predictions to our Kaggle competition.
(Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.

High-cardinality categoricals

This code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'

```python
# Reduce cardinality for NEIGHBORHOOD feature ...
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' # !pip install category_encoders==2.* # !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import numpy as np import pandas as pd import category_encoders as ce from sklearn.model_selection import train_test_split from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegression from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.tree import DecisionTreeClassifier import matplotlib.pyplot as plt train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train, validate = train_test_split(train, random_state=42) train.shape, validate.shape, test.shape # Pandas Profiling can be very slow with medium & large datasets. # These parameters will make it faster. # https://github.com/pandas-profiling/pandas-profiling/issues/222 # from pandas_profiling import ProfileReport # profile_report = ProfileReport(train, minimal=True) # profile_report # ??? BASELINE ??? 
# TARGET SUMMARY train['status_group'].value_counts(normalize=True) ###Output _____no_output_____ ###Markdown DATA WRANGLING ###Code def data_wrangling(df): df = df.copy() df = df.drop(columns='quantity_group') df['latitude'] = df['latitude'].replace(-2e-08, 0) for col in ['longitude', 'latitude']: df[col] = df[col].replace(0, np.nan) return df train = data_wrangling(train) validate = data_wrangling(validate) test = data_wrangling(test) ###Output _____no_output_____ ###Markdown FEATURE SELECTION ###Code target = 'status_group' train_features = train.drop(columns=[target, 'id']) # Get a list of the numeric features numeric_features = train_features.select_dtypes(include='number').columns.tolist() # Get a series with the cardinality of the nonnumeric features cardinality = train_features.select_dtypes(exclude='number').nunique() # Get a list of all categorical features with cardinality <= 50 categorical_features = cardinality[cardinality <= 50].index.tolist() # Combine the lists features = numeric_features + categorical_features X_train = train[features] y_train = train[target] X_validate = validate[features] y_validate = validate[target] X_test = test[features] ###Output _____no_output_____ ###Markdown MODEL ###Code pipeline = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='mean'), DecisionTreeClassifier(max_depth=15, random_state=42) ) pipeline.fit(X_train, y_train) print(f'Train accuracy: {pipeline.score(X_train, y_train):.5f}') print(f'Validation accuracy: {pipeline.score(X_validate, y_validate):.5f}') ###Output Train accuracy: 0.83517 Validation accuracy: 0.76471 ###Markdown FEATURE IMPORTANCE ###Code model = pipeline.named_steps['decisiontreeclassifier'] encoder = pipeline.named_steps['onehotencoder'] encoded_columns = encoder.transform(X_validate).columns feature_imp = pd.Series(model.feature_importances_, encoded_columns) plt.figure(figsize=(10,30)) # feature_imp.sort_values().plot.barh() feature_imp.sort_values().tail(15) 
###Output

    _____no_output_____

###Markdown

GENERATE CSV

###Code

def kaggle_csv(filename, pipeline, test_data):
  y_pred = pipeline.predict(test_data)

  sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
  submission = sample_submission.copy()
  submission['status_group'] = y_pred
  submission.to_csv(f'{filename}.csv', index=False)

  if 'google.colab' in sys.modules:
    from google.colab import files
    files.download(f'{filename}.csv')

kaggle_csv('221-3', pipeline, X_test)

###Output

    _____no_output_____

###Markdown

Lambda School Data Science
*Unit 2, Sprint 2, Module 1*

---

Decision Trees Assignment

- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.
- [ ] Do train/validate/test split with the Tanzania Waterpumps data.
- [ ] Begin with baselines for classification.
- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.
- [ ] Get your validation accuracy score.
- [ ] Get and plot your feature importances.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.
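The modeling item in the checklist above (encode categoricals, impute missing values, fit a decision tree inside one scikit-learn pipeline) can be sketched on toy data. This is a minimal sketch, not the assignment solution: the notebooks below use `category_encoders.OneHotEncoder` on the real waterpumps features, while this sketch substitutes scikit-learn's built-in `OneHotEncoder` and a made-up two-column DataFrame so it is self-contained.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the waterpumps features (values are illustrative)
X = pd.DataFrame({
    'quantity': ['enough', 'dry', 'enough', 'dry', 'seasonal', 'enough'],
    'gps_height': [1390.0, np.nan, 686.0, 263.0, np.nan, 1062.0],
})
y = pd.Series(['functional', 'non functional', 'functional',
               'non functional', 'functional', 'functional'])

pipeline = Pipeline([
    # One-hot encode the categorical column, median-impute the numeric one
    ('prep', ColumnTransformer([
        ('cat', OneHotEncoder(handle_unknown='ignore'), ['quantity']),
        ('num', SimpleImputer(strategy='median'), ['gps_height']),
    ])),
    ('tree', DecisionTreeClassifier(max_depth=3, random_state=42)),
])

pipeline.fit(X, y)
print(f'toy training accuracy: {pipeline.score(X, y):.3f}')
```

Keeping the encoder and imputer inside the pipeline means the same transformations are applied, identically, to train, validation, and test data.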
Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. 
For example:

```python
train['functional'] = (train['status_group']=='functional').astype(int)
```

You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)

- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")
- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)

You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.

You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty.

High-cardinality categoricals

This code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'

```python
Reduce cardinality for NEIGHBORHOOD feature ...
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape # Check Pandas Profiling version import pandas_profiling pandas_profiling.__version__ # Old code for Pandas Profiling version 2.3 # It can be very slow with medium & large datasets. # These parameters will make it faster. # profile = train.profile_report( # check_correlation_pearson=False, # correlations={ # 'pearson': False, # 'spearman': False, # 'kendall': False, # 'phi_k': False, # 'cramers': False, # 'recoded': False, # }, # plot={'histogram': {'bayesian_blocks_bins': False}}, # ) # # New code for Pandas Profiling version 2.4 from pandas_profiling import ProfileReport profile = ProfileReport(train, minimal=True).to_notebook_iframe() profile ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [x] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. 
Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [x] Do train/validate/test split with the Tanzania Waterpumps data.- [x] Begin with baselines for classification.- [x] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [x] Get your validation accuracy score.- [x] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. 
(For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape ###Output _____no_output_____ ###Markdown Test train split ###Code train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train['status_group'], random_state=42) train.shape, val.shape, test.shape # The status_group column is the target target = 'status_group' # Get a dataframe with all train columns except the target & id train_features = train.drop(columns=[target, 'id']) # Get a list of the numeric features numeric_features = train_features.select_dtypes(include='number').columns.tolist() # Get a series with the cardinality of the nonnumeric features cardinality = train_features.select_dtypes(exclude='number').nunique() # Get a list of all categorical features with cardinality <= 50 categorical_features = cardinality[cardinality <= 50].index.tolist() # Combine the lists features = numeric_features + categorical_features print(features) # Arrange data into X features matrix and y target vector X_train = train[features] y_train = train[target] X_val = 
val[features] y_val = val[target] X_test = test[features] X_train.shape , y_train.shape,X_val.shape, y_val.shape, X_test.shape features ###Output _____no_output_____ ###Markdown Begin with baseline ###Code train['status_group'].value_counts(normalize=True) ###Output _____no_output_____ ###Markdown Make pipeline ###Code from sklearn.linear_model import LogisticRegression from sklearn.pipeline import make_pipeline import category_encoders as ce pipeline= make_pipeline( ce.OneHotEncoder(), LogisticRegression() ) pipeline.fit(X_train, y_train) train_acc = pipeline.score(X_train, y_train) val_acc = pipeline.score(X_val, y_val) #test_acc = pipeline.score(X_test, y_test) print("Train acc: ",train_acc, "Val acc: ", val_acc) from sklearn.tree import DecisionTreeClassifier dtc_model= make_pipeline( ce.OneHotEncoder(), DecisionTreeClassifier() ) #dtc_model = DecisionTreeClassifier(max_depth=1) dtc_model.fit(X_train, y_train) dtc_train_acc = dtc_model.score(X_train, y_train) dtc_val_acc = dtc_model.score(X_val, y_val) ###Output _____no_output_____ ###Markdown Val accuracy ###Code print("Train Acc: ",dtc_train_acc, "Val Acc" ,dtc_val_acc) import graphviz from sklearn.tree import export_graphviz model = dtc_model.named_steps['decisiontreeclassifier'] encoder = dtc_model.named_steps['onehotencoder'] encoded_columns = encoder.transform(X_val).columns dot_data = export_graphviz(model, out_file=None, max_depth=3, feature_names=encoded_columns, class_names=model.classes_, impurity=False, filled=True, proportion=True, rounded=True) display(graphviz.Source(dot_data)) ###Output _____no_output_____ ###Markdown Feature Importance ###Code import matplotlib.pyplot as plt encoder = dtc_model.named_steps['onehotencoder'] encoded_columns = encoder.transform(X_val).columns importances = pd.Series(model.feature_importances_, encoded_columns) plt.figure(figsize=(10,30)) importances.sort_values().plot.barh() y_pred = dtc_model.predict(X_test) DATA_PATH = 
'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('Cortez-Ethridge.csv', index=False) ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [x] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [x] Do train/validate/test split with the Tanzania Waterpumps data.- [x] Begin with baselines for classification.- [x] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [x] Get your validation accuracy score.- [x] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. 
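The "begin with baselines for classification" item above boils down to one number: the accuracy you get by always predicting the most frequent class, which is exactly what `value_counts(normalize=True)` reports. A minimal sketch on a toy Series (the values here are illustrative stand-ins for `train['status_group']`):

```python
import pandas as pd

# Toy stand-in for train['status_group']
status = pd.Series(['functional', 'functional', 'functional',
                    'non functional', 'functional needs repair'])

# Majority-class baseline: accuracy of always guessing the most common value
baseline_accuracy = status.value_counts(normalize=True).max()
print(baseline_accuracy)  # 0.6
```

Any model worth keeping should beat this number on the validation set.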
Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. 
For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape import numpy as np def wrangle(df): df = df.copy() df['longitude'].replace({0: np.NaN}, inplace=True) df['latitude'].replace({0: np.NaN}, inplace=True) df['construction_year'].replace({0: np.NaN}, inplace=True) # may remove this, but will try for now df['population'].replace({0: np.NaN, 1: np.NaN}, inplace=True) df['installer'].replace({0: np.NaN}, inplace=True) # Reduce cardinality for installer top15 = df['installer'].value_counts()[:15].index # At locations where the neighborhood is NOT in the top 10, # replace the neighborhood with 'OTHER' df.loc[~df['installer'].isin(top15), 'installer'] = 'OTHER' # test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER' # do I need to find a way to use top10 for all? # may need to add wrangle to the pipeline somehow return df # these columns have alot of zeros too # population construction_year num_private # Do train/validate/test split with the Tanzania Waterpumps data. 
train, val = train_test_split(train, random_state=42)
train.shape, val.shape

train = wrangle(train)
val = wrangle(val)
train.shape, val.shape

# starting with mode baseline, what is the mode of target
train['status_group'].value_counts()

# mode is functional, so will get baseline with that
from sklearn.metrics import accuracy_score
guess = ['functional'] * len(train)
accuracy_score(guess, train['status_group'])

###Output

    _____no_output_____

###Markdown

Our baseline is 54% accuracy.

###Code

train.describe()
train.describe(exclude='number').T.sort_values(by='unique')

target = 'status_group'
train_features = train.drop([target, 'id'], axis=1)
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
cardinality = train_features.select_dtypes(exclude='number').nunique()
categorical_features = cardinality[cardinality <= 50].index.tolist()
features = numeric_features + categorical_features
len(features)

X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]

X_train.shape, X_val.shape, y_train.shape, y_val.shape

###Output

    _____no_output_____

###Markdown

Model Package Imports

###Code

#infrastructure
from sklearn.pipeline import Pipeline

#models
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

#ensembles
from sklearn.ensemble import VotingClassifier
from sklearn.ensemble import RandomForestClassifier

#preprocessing
# from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from category_encoders.one_hot import OneHotEncoder

###Output

    _____no_output_____

###Markdown

Forest And Tree Classifier

Hyperparameter notes:

1st (submitted):
- min_samples_leaf = 15
- no PCA

`val score: ~82%`
`test score: 77.113%`

2nd (submitted):
- add PCA (all components) to 1st

`val score: ~83.6%`
`test score: 75.121%`

3rd:
- add scaler to 1st

`val score: 81.89%`
`test score: 77.030%`

4th:
- forest classifier
- min_samples_leaf = 2

`val score: 88.81%`
`test score: 79.7%`

5th:
- same as fourth but re-fit on val and train

`test score: 80.05%`

###Code

tree_pipeline = Pipeline([
    ('encoder', OneHotEncoder(use_cat_names=True)),
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
    ('model', DecisionTreeClassifier(min_samples_leaf=17))
])

forest_pipeline = Pipeline([
    ('encoder', OneHotEncoder(use_cat_names=True)),
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
    ('model', RandomForestClassifier(min_samples_leaf=2))
])

from sklearn.model_selection import GridSearchCV

# Parameters of pipelines can be set using ‘__’ separated parameter names
# (min_samples_leaf must be a positive int or a float in (0, 1], so None is not a valid candidate):
param_grid = {
    'model__min_samples_leaf': [1, 2]
}
search = GridSearchCV(forest_pipeline, param_grid, n_jobs=-1)
search.fit(X_train, y_train)

print("Best parameter (CV score=%0.3f):" % search.best_score_)
print(search.best_params_)

tree_pipeline.fit(X_train, y_train)
forest_pipeline.fit(X_train, y_train)

print(tree_pipeline.score(X_val, y_val))
print(forest_pipeline.score(X_val, y_val))

# can't really run this after running PCA (lose interpretability)
import matplotlib.pyplot as plt

encoder = forest_pipeline['encoder']
encoded_columns = encoder.transform(X_val).columns
importances = pd.Series(forest_pipeline['model'].feature_importances_, encoded_columns)

# plt.figure(10, 30)
importances.sort_values()[-10:].plot.barh(color='grey');

# going to re-fit model with ALL data
X_train = pd.concat([X_train, X_val])
y_train = pd.concat([y_train, y_val])

forest_pipeline.fit(X_train, y_train)

X_test = test[features]
y_preds = forest_pipeline.predict(X_test)

submission = test[['id']]
submission['status_group'] = pd.Series(y_preds)
submission.head()

submission.to_csv('submission5.csv', index=False)

###Output

    _____no_output_____

###Markdown

Voting Classifier

###Code

data_pipeline = Pipeline([
    ('encoder', OneHotEncoder(use_cat_names=True)),
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler',
StandardScaler()) ]) X_train_processed = data_pipeline.fit_transform(X_train) X_val_processed = data_pipeline.transform(X_val) log_clf = LogisticRegression(max_iter=1000) rnd_clf = RandomForestClassifier(min_samples_leaf=2) svm_clf = SVC(probability=True) voting_clf = VotingClassifier( estimators = [ ('lr', log_clf), ('rf', rnd_clf), ('scv', svm_clf) ], voting="hard" ) voting_clf.fit(X_train_processed, y_train) ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [ ] Get your validation accuracy score.- [ ] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. 
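The VotingClassifier cell above uses `voting="hard"`, which simply takes the majority class across the base estimators' per-sample predictions (so `probability=True` on the SVC is only needed if you switch to `voting="soft"`). A minimal pure-Python sketch of the hard-voting rule; the model predictions below are toy values for illustration, not taken from the notebook:

```python
from collections import Counter

def hard_vote(predictions_per_model):
    """Majority vote across models.

    predictions_per_model is a list of equal-length prediction lists,
    one inner list per base model."""
    n_samples = len(predictions_per_model[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(model_preds[i] for model_preds in predictions_per_model)
        voted.append(votes.most_common(1)[0][0])
    return voted

# toy predictions from three hypothetical models for four waterpumps
preds = [
    ['functional', 'functional', 'non functional', 'functional'],
    ['functional', 'non functional', 'non functional', 'functional'],
    ['non functional', 'functional', 'non functional', 'non functional'],
]
print(hard_vote(preds))  # -> ['functional', 'functional', 'non functional', 'functional']
```

Note that ties are broken here by `Counter.most_common`'s insertion order, whereas scikit-learn's hard voting breaks ties by ascending class-label order.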
Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1.
For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ...
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape # Pandas Profiling can be very slow with medium & large datasets. # These parameters will make it faster. # https://github.com/pandas-profiling/pandas-profiling/issues/222 from pandas_profiling import ProfileReport profile_report = train.profile_report( check_correlation_pearson=False, correlations={ 'pearson': False, 'spearman': False, 'kendall': False, 'phi_k': False, 'cramers': False, 'recoded': False, }, plot={'histogram': {'bayesian_blocks_bins': False}}, ) profile_report # Do train/validate/test split with the Tanzania Waterpumps data. train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train['status_group']) train.shape, val.shape, test.shape import numpy as np def wrangle(X): """Wrangle train, validate, and test sets in the same way""" # Prevent SettingWithCopyWarning X = X.copy() # About 3% of the time, latitude has small values near zero, # outside Tanzania, so we'll treat these values like zero. 
X['latitude'] = X['latitude'].replace(-2e-08, 0) # When columns have zeros and shouldn't, they are like null values. # So we will replace the zeros with nulls, and impute missing values later. cols_with_zeros = ['longitude', 'latitude'] for col in cols_with_zeros: X[col] = X[col].replace(0, np.nan) # quantity & quantity_group are duplicates, so drop one X = X.drop(columns='quantity_group') # return the wrangled dataframe return X train = wrangle(train) val = wrangle(val) test = wrangle(test) train['status_group'].value_counts(normalize=True) target= 'status_group' train_features= train.drop(columns=[target, 'id']) numeric_features=train_features.select_dtypes(include='number').columns.tolist() cardinality = train_features.select_dtypes(exclude='number').nunique() categorical_features=cardinality[cardinality <=50].index.tolist() features= numeric_features + categorical_features print(features) X_train=train[features] y_train=train[target] X_val=val[features] y_val=val[target] X_test=test[features] import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.tree import DecisionTreeClassifier from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler pipeline=make_pipeline(ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='mean'), StandardScaler(), DecisionTreeClassifier(min_samples_leaf=20, max_depth=None)) pipeline.fit(X_train, y_train) print('Validation Accuracy: ', pipeline.score(X_val, y_val)) y_pred=pipeline.predict(X_test) # feature importances model = pipeline.named_steps['decisiontreeclassifier'] # model.feature_importances_ #linear models have coeff, but trees have 'feat imports' encoder= pipeline.named_steps['onehotencoder'] encoded_cols = encoder.transform(X_val).columns importances = pd.Series(model.feature_importances_, encoded_cols) import matplotlib.pyplot as plt plt.figure(figsize=(10,30)) importances.sort_values().plot.barh(color='grey'); ###Output _____no_output_____ ###Markdown 
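The `DecisionTreeClassifier` in the pipeline above chooses each split by impurity reduction, and `feature_importances_` reports the normalized total impurity reduction each feature contributes. A small self-contained sketch of the Gini impurity computation a tree uses (toy labels only, not the waterpumps columns):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# toy node: 6 'functional' and 4 'non functional' pumps
parent = ['functional'] * 6 + ['non functional'] * 4
left, right = parent[:6], parent[6:]  # a perfect split on some feature
n = len(parent)
weighted_child = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
print(round(gini(parent) - weighted_child, 2))  # impurity reduction -> 0.48
```

A split that sends each class to its own child drives the weighted child impurity to zero, so the whole parent impurity (0.48 here) is the reduction the tree credits to that feature.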
###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape # Check Pandas Profiling version import pandas_profiling pandas_profiling.__version__ # Old code for Pandas Profiling version 2.3 # It can be very slow with medium & large datasets. # These parameters will make it faster.
# profile = train.profile_report( # check_correlation_pearson=False, # correlations={ # 'pearson': False, # 'spearman': False, # 'kendall': False, # 'phi_k': False, # 'cramers': False, # 'recoded': False, # }, # plot={'histogram': {'bayesian_blocks_bins': False}}, # ) # # New code for Pandas Profiling version 2.4 from pandas_profiling import ProfileReport profile = ProfileReport(train, minimal=True).to_notebook_iframe() profile ###Output variables: 100%|██████████| 41/41 [00:24<00:00, 1.68it/s] table: 100%|██████████| 1/1 [00:01<00:00, 1.48s/it] warnings [correlations]: 100%|██████████| 3/3 [00:00<00:00, 601.36it/s] package: 100%|██████████| 1/1 [00:00<00:00, 77.12it/s] build report structure: 100%|██████████| 1/1 [00:03<00:00, 3.92s/it] ###Markdown Get Baseline ###Code import numpy as np import pandas as pd from sklearn.model_selection import train_test_split # Split train into train & val train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train['status_group'], random_state=42) def wrangle(X): """Wrangle train, validate, and test sets in the same way""" # Prevent SettingWithCopyWarning X = X.copy() # About 3% of the time, latitude has small values near zero, # outside Tanzania, so we'll treat these values like zero. X['latitude'] = X['latitude'].replace(-2e-08, 0) # When columns have zeros and shouldn't, they are like null values. # So we will replace the zeros with nulls, and impute missing values later. # Also create a "missing indicator" column, because the fact that # values are missing may be a predictive signal. 
cols_with_zeros = ['longitude', 'latitude', 'construction_year', 'gps_height', 'population'] for col in cols_with_zeros: X[col] = X[col].replace(0, np.nan) X[col+'_MISSING'] = X[col].isnull() # Drop duplicate columns duplicates = ['quantity_group', 'payment_type', 'waterpoint_type_group'] X = X.drop(columns=duplicates) # Drop recorded_by (never varies) and id (always varies, random) unusable_variance = ['recorded_by', 'id'] X = X.drop(columns=unusable_variance) # Convert date_recorded to datetime X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True) # Extract components from date_recorded, then drop the original column X['year_recorded'] = X['date_recorded'].dt.year X['month_recorded'] = X['date_recorded'].dt.month X['day_recorded'] = X['date_recorded'].dt.day X = X.drop(columns='date_recorded') # Engineer feature: how many years from construction_year to date_recorded X['years'] = X['year_recorded'] - X['construction_year'] X['years_MISSING'] = X['years'].isnull() # return the wrangled dataframe return X train = wrangle(train) val = wrangle(val) test = wrangle(test) # The status_group column is the target target = 'status_group' # Get a dataframe with all train columns except the target train_features = train.drop(columns=[target]) # Get a list of the numeric features numeric_features = train_features.select_dtypes(include='number').columns.tolist() # Get a series with the cardinality of the nonnumeric features cardinality = train_features.select_dtypes(exclude='number').nunique() # Get a list of all categorical features with cardinality <= 50 categorical_features = cardinality[cardinality <= 50].index.tolist() # Combine the lists features = numeric_features + categorical_features cardinality # Arrange data into X features matrix and y target vector X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] import category_encoders as ce from sklearn.tree import 
DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(random_state=0, n_jobs=-1) ) pipeline.fit(X_train, y_train) print('Train accuracy: ', pipeline.score(X_train, y_train)) print('Validation accuracy: ', pipeline.score(X_val, y_val)) X_test = test[features] y_pred = pipeline.predict(X_test) y_pred submission = sample_submission.copy() submission['status_group'] = y_pred submission.set_index('id', inplace=True) submission.to_csv('josh-submission.csv') submission y_val.shape, y_pred.shape ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [ ] Get your validation accuracy score.- [ ] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. 
Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. 
For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape from pandas_profiling import ProfileReport profile = ProfileReport(train, minimal=True).to_notebook_iframe() profile ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [ ] Get your validation accuracy score.- [ ] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. 
(Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. 
Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index

# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
``` ###Code
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
    !pip install category_encoders==2.*
    !pip install pandas-profiling==2.*
# If you're working locally:
else:
    DATA_PATH = '../data/'

import pandas as pd
from sklearn.model_selection import train_test_split

train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
                 pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
train.shape, test.shape

# Check Pandas Profiling version
import pandas_profiling
pandas_profiling.__version__

# Old code for Pandas Profiling version 2.3
# It can be very slow with medium & large datasets.
# These parameters will make it faster.
# profile = train.profile_report(
#     check_correlation_pearson=False,
#     correlations={
#         'pearson': False,
#         'spearman': False,
#         'kendall': False,
#         'phi_k': False,
#         'cramers': False,
#         'recoded': False,
#     },
#     plot={'histogram': {'bayesian_blocks_bins': False}},
# )

# New code for Pandas Profiling version 2.4
from pandas_profiling import ProfileReport
profile = ProfileReport(train, minimal=True)
profile.to_notebook_iframe() ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [x] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. 
Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [x] Do train/validate/test split with the Tanzania Waterpumps data.- [x] Begin with baselines for classification.- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [ ] Get your validation accuracy score.- [ ] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s)!- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. 
(For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python
# Reduce cardinality for NEIGHBORHOOD feature ...
# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index

# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
``` ###Code
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
    !pip install category_encoders==2.*
    !pip install pandas-profiling==2.*
# If you're working locally:
else:
    DATA_PATH = '../data/'

import pandas as pd
from sklearn.model_selection import train_test_split

train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
                 pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
train.shape, test.shape

# Performs a train/validate split on the training data
train, val = train_test_split(train, stratify=train['status_group'],
                              test_size=0.2, random_state=42)
train.shape, val.shape

# Pandas Profiling can be very slow with medium & large datasets.
# These parameters will make it faster.
# https://github.com/pandas-profiling/pandas-profiling/issues/222
import pandas_profiling

profile_report = train.profile_report(
    correlations={
        'pearson': False,
        'spearman': False,
        'kendall': False,
        'phi_k': False,
        'cramers': False,
        'recoded': False,
    },
    plot={'histogram': {'bayesian_blocks_bins': False}},
)
profile_report

import numpy as np

def wrangle(X):
    '''Apply wrangling to a dataframe'''
    # Copy
    X = X.copy()
    # Convert near-zero values to zero
    X['latitude'] = X['latitude'].replace(-2e-08, 0)
    X['date_recorded'] = pd.to_datetime(X['date_recorded'])
    X['month_recorded'] = X['date_recorded'].dt.month
    X['year_recorded'] = X['date_recorded'].dt.year
    # Replace zeros that should be null values with np.nan
    cols_with_zeros = ['longitude', 'latitude', 'population', 'construction_year']
    for col in cols_with_zeros:
        X[col] = X[col].replace(0, np.nan)
    # Drop duplicate columns
    X = X.drop(columns='quantity_group')
    # Return wrangled dataframe
    return X

train = wrangle(train)
val = wrangle(val)
test = wrangle(test) ###Output _____no_output_____ ###Markdown Begin with baselinesBased on the baseline, if we guessed that each unit was functional, we would be correct ~54% of the time ###Code train['status_group'].value_counts(normalize=True) ###Output _____no_output_____ ###Markdown Feature selectionHere, SelectKBest will be used along with a decision tree classifier to select the ideal columns ###Code
# Target column
target = 'status_group'

# Drop id and target from dataframe
train_features = train.drop(columns=[target, 'id'])

# List of numeric features
numeric_features = (train_features
                    .select_dtypes(include='number')
                    .columns
                    .tolist()
                    )

# Series with cardinality of object columns
cardinality = train_features.select_dtypes(exclude='number').nunique()

# Get a list of all features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()

# Combine lists
features = numeric_features + categorical_features
print(features)

# Arrange data
X_train =
train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]

# Encode, impute, select ideal features based on accuracy score
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

def bestTreeK(X_trn, X_tst, y_trn, y_tst):
    '''Encodes train and test, imputes missing values, fits a decision
    tree classifier, and returns the best k along with accuracy scores.'''
    train_score = None
    test_score = None
    best_k = None
    encoder = ce.OneHotEncoder(use_cat_names=True)
    imputer = SimpleImputer()
    model = DecisionTreeClassifier(min_samples_leaf=1, max_depth=16,
                                   random_state=42)
    # Encode and impute train and test dataframes
    Xtr_enc = encoder.fit_transform(X_trn)
    Xtr_imp = imputer.fit_transform(Xtr_enc)
    Xts_enc = encoder.transform(X_tst)
    Xts_imp = imputer.transform(Xts_enc)
    # Run decision tree on X_train and iteratively arrive at parameter choice
    for k in range(1, Xts_imp.shape[1]+1):
        selector = SelectKBest(score_func=f_classif, k=k)
        Xtr_sel = selector.fit_transform(Xtr_imp, y_trn)
        Xts_sel = selector.transform(Xts_imp)
        # Get model score for train
        model.fit(Xtr_sel, y_trn)
        trn_score = model.score(Xtr_sel, y_trn)
        # Get model score for test
        tst_score = model.score(Xts_sel, y_tst)
        # Keep the k with the best test accuracy so far
        if k == 1 or tst_score > test_score:
            train_score = trn_score
            test_score = tst_score
            best_k = k
    output = (f"Best Test Accuracy is {test_score} "
              f"Best k: {best_k}\nTrain Accuracy with this model: {train_score}")
    return output

print(bestTreeK(X_train, X_val, y_train, y_val))

import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectKBest, SelectFwe
from sklearn.pipeline import make_pipeline

# Use pipeline to fit multiple sklearn tools
pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    SimpleImputer(strategy='most_frequent'),
    DecisionTreeClassifier(min_samples_leaf=1, max_depth=16, random_state=42)
)
pipeline.fit(X_train, y_train)
print("Tr. Acc:", pipeline.score(X_train, y_train))
print("Ts. Acc:", pipeline.score(X_val, y_val))

# Plot the feature importances
import matplotlib.pyplot as plt
model = pipeline.named_steps['decisiontreeclassifier']
encoder = pipeline.named_steps['onehotencoder']
encoded_columns = encoder.transform(X_val).columns
importances = pd.Series(model.feature_importances_, encoded_columns)
plt.figure(figsize=(10,35))
importances.sort_values().plot.barh(color='grey');

# Submission csv
y_pred = pipeline.predict(X_test)
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('jose-marquez-ds11-wp.csv', index=False) ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [ ] Get your validation accuracy score.- [ ] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. 
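One checklist item above, beginning with baselines for classification, can be sketched as a majority-class guess. This is a toy example with made-up labels, not the competition data:

```python
import pandas as pd

# Made-up status_group labels, roughly mimicking the Waterpumps class balance.
y_train = pd.Series(['functional'] * 6 +
                    ['non functional'] * 3 +
                    ['functional needs repair'] * 1)

# Guess the most frequent class for every row; its frequency is the
# accuracy score any real model has to beat.
majority_class = y_train.mode()[0]
baseline_accuracy = (y_train == majority_class).mean()
print(majority_class, baseline_accuracy)
```

On the real training labels this is the "always guess functional, correct ~54% of the time" baseline computed elsewhere in the notebook with `value_counts(normalize=True)`.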
Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. 
For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape # Pandas Profiling can be very slow with medium & large datasets. # These parameters will make it faster. # https://github.com/pandas-profiling/pandas-profiling/issues/222 import pandas_profiling profile_report = train.profile_report( check_correlation_pearson=False, correlations={ 'pearson': False, 'spearman': False, 'kendall': False, 'phi_k': False, 'cramers': False, 'recoded': False, }, plot={'histogram': {'bayesian_blocks_bins': False}}, ) profile_report ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. 
The Data page has feature definitions.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [ ] Get your validation accuracy score.- [ ] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. 
(For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python Reduce cardinality for NEIGHBORHOOD feature ... 
Get a list of the top 10 neighborhoodstop10 = train['NEIGHBORHOOD'].value_counts()[:10].index At locations where the neighborhood is NOT in the top 10, replace the neighborhood with 'OTHER'train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'``` ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape train, val = train_test_split(train, test_size=0.25, stratify=train["status_group"], random_state=37) train.shape, val.shape train["status_group"].unique() train["status_group"].value_counts(normalize=True) train["functional"] = (train["status_group"] == "functional").astype(int) train.shape, val.shape train["functional"].value_counts(normalize=True) # from pandas_profiling import ProfileReport # profile = ProfileReport(train, explorative=True).to_notebook_iframe() # profile # import plotly.express as px # px.scatter(train, x="longitude", y="latitude", color="functional") train["latitude"].describe() import numpy as np def wrangle(X): # ```This is a data-wrangling function.``` X = X.copy() X["latitude"] = X["latitude"].replace(-2e-08, 0) zero_cols = ["longitude", "latitude"] for col in zero_cols: X[col] = X[col].replace(0, np.nan) # Dropping highly correlated cells, unique value and redundancy. 
X = X.drop(["extraction_type_group", "extraction_type_class", "management_group", "quality_group", "payment_type", "quantity_group", "source_type", "source_class", "waterpoint_type_group", "recorded_by"], axis=1) return X train = wrangle(train) val = wrangle(val) test = wrangle(test) train["latitude"].isnull().sum() # px.scatter(train, x="longitude", y="latitude", color="functional") ###Output _____no_output_____ ###Markdown **The below function is an attempt to automate feature-building. It did not go according to plan.** ###Code # def feature_building(X): # X = X.copy() # target = "status_group" # X_features = X.drop(columns=[target, "id"]) # X_num = X_features.select_dtypes(include="number").columns.tolist() # X_card = X_features.select_dtypes(exclude="number").nunique() # X_cat = X_card[X_card <= 30].index.tolist() # X_feats = X_num + X_cat # return X # feature_building(train) num_feats = ["amount_tsh", "gps_height", "longitude", "latitude", "num_private", "region_code", "district_code", "population", "construction_year"] cat_feats = ["basin", "region", "public_meeting","scheme_management", "permit", "extraction_type", "management", "payment", "water_quality", "quantity", "source", "waterpoint_type"] test_feats = num_feats + cat_feats test = test.drop(columns=["id"]) X_test = test[test_feats] target = "status_group" train_feats = train.drop(columns=[target, "id", "functional"]) val_feats = val.drop(columns=[target, "id"]) numt_feats = train_feats.select_dtypes(include="number").columns.tolist() numv_feats = val_feats.select_dtypes(include="number").columns.tolist() hit_card = train_feats.select_dtypes(exclude="number").nunique() hive_card = val_feats.select_dtypes(exclude="number").nunique() catT_feats = hit_card[hit_card <= 30].index.tolist() catV_feats = hive_card[hive_card <= 30].index.tolist() train_features = numt_feats + catT_feats val_features = numv_feats + catV_feats X_train = train[train_features] y_train = train[target] X_val = val[val_features] 
y_val = val[target] ###Output _____no_output_____ ###Markdown **Cells below are to ensure I know what this code is doing.** ###Code train_feats.select_dtypes(include="number").columns.tolist() train_feats.select_dtypes(exclude="number").nunique() hit_card[hit_card <= 30].index.tolist() train.shape ###Output _____no_output_____ ###Markdown **In the cells above, the data was cleaned and explored; in the cells below, the model is being fit and the submission is created.** ###Code import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.feature_selection import SelectKBest from sklearn.tree import DecisionTreeClassifier num_feats = ["amount_tsh", "gps_height", "longitude", "latitude", "num_private", "region_code", "district_code", "population", "construction_year"] cat_feats = ["basin", "region", "public_meeting","scheme_management", "permit", "extraction_type", "management", "payment", "water_quality", "quantity", "source", "waterpoint_type"] estimator = Pipeline([ ("encoder", ce.OneHotEncoder(use_cat_names=True, cols=cat_feats)), ("imputer", SimpleImputer(strategy="most_frequent")), ("scaler", StandardScaler()), ("dec_tree", DecisionTreeClassifier(random_state=33, max_depth=12)) ]) estimator.fit(X_train, y_train); print('Training Accuracy', estimator.score(X_train, y_train)) print('Validation Accuracy', estimator.score(X_val, y_val)) y_pred = estimator.predict(X_test) DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' whyse_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') submission = whyse_submission.copy() submission['status_group'] = y_pred submission.to_csv('whyse-submission.csv', index=False) ###Output _____no_output_____ ###Markdown Lambda School Data Science*Unit 2, Sprint 2, Module 1*--- Decision Trees Assignment- [x] [Sign 
up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.- [x] Do train/validate/test split with the Tanzania Waterpumps data.- [x] Begin with baselines for classification.- [x] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.- [x] Get your validation accuracy score.- [x] Get and plot your feature importances.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Reading- A Visual Introduction to Machine Learning - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. 
This 10 minute video has excellent diagrams and explanations._- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) Doing- [ ] Add your own stretch goal(s) !- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).- [ ] Make exploratory visualizations and share on Slack. Exploratory visualizationsVisualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data. For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:```pythontrain['functional'] = (train['status_group']=='functional').astype(int)```You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcutdiscretization-and-quantiling).)You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. 
For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty. High-cardinality categoricalsThis code from a previous assignment demonstrates how to replace less frequent values with 'OTHER'```python
# Reduce cardinality for NEIGHBORHOOD feature ...

# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index

# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
``` ###Code
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
    !pip install category_encoders==2.*
    !pip install pandas-profiling==2.*
# If you're working locally:
else:
    DATA_PATH = '../data/'

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

def setup_data():
    train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),  # thanks
                     pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
    test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
    sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
    return train, test, sample_submission

train, test, _ = setup_data()  # kaggle
train.shape, test.shape

target = 'status_group'
train.plot(x=target, kind='hist')

# we do another split on our training dataframe
from sklearn.model_selection import train_test_split
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
                              stratify=train['status_group'], random_state=8)

def wrangle(X):  # do some stuff
    # Prevent SettingWithCopyWarning
    X = X.copy()
    # About 3% of the time, latitude has small values near zero,
    # outside Tanzania, so we'll treat these values like zero.
if ('latitude' in X.columns): X['latitude'] = X['latitude'].replace(-2e-08, 0) # When columns have zeros and shouldn't, they are like null values. # So we will replace the zeros with nulls, and impute missing values later. cols_with_zeros = ['longitude', 'latitude'] for col in cols_with_zeros: X[col] = X[col].replace(0, np.nan) if ('permit' in X.columns): X['permit'] = X['permit'].astype('str') X['permit'] = X['permit'].replace({'True': 'yes','False': 'no'}) # quantity & quantity_group are duplicates, so drop one dropcols = ['wpt_name', 'ward','scheme_name', 'id'] for i in dropcols: if i in X.columns: X.drop(labels=i, axis=1, inplace=True) #X['age'] = pd.DatetimeIndex(X['date_recorded']).year - X.construction_year #not good due to zeros return X ## feature selection look at numericals cols train['functional'] = (train['status_group']=='functional').astype(int) train.corr().reset_index().sort_values('functional').plot(x='index', y='functional', figsize=(10,5), kind ='barh', title='correlation between numerical features & target') cor = train.copy().corr().reset_index().sort_values(by='functional') cor.functional = cor.functional.apply(abs) ncols =cor.sort_values('functional',ascending=False)['index'][1:7].values.tolist() #ordered by absolute correlation print(ncols) cardinality = train.select_dtypes(exclude='number').nunique() # Get a list of all categorical features with cardinality <= 50 categorical_features = cardinality[cardinality <= 50].index.tolist() print(categorical_features) def arrange(df,features, target): #for validation / selection only train, val = train_test_split(df, train_size=0.80, test_size=0.20, stratify=df[target], random_state=8) train = wrangle(train) #dataframes val = wrangle(val) #new dataframes X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] return X_train, y_train, X_val, y_val def reduce_features(x): return (x[ft19]) #return the subset of encoded features we selected #pipeline from 
sklearn.preprocessing import FunctionTransformer import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.tree import DecisionTreeClassifier from sklearn.pipeline import make_pipeline #from sklearn.preprocessing import StandardScaler scaling not used for trees #reduce complexity of DT to bias model away from training data to validation, i.e. reduce variance. #change / increase 'min node size aka min samples per leaf' pipeline_red = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='median'), DecisionTreeClassifier(min_samples_leaf=20, random_state=88) ) pipeline19 =make_pipeline( #same ce.OneHotEncoder(use_cat_names=True), FunctionTransformer(reduce_features), SimpleImputer(strategy='median'), DecisionTreeClassifier(random_state=88) ) pipeline_red19 = make_pipeline( ce.OneHotEncoder(use_cat_names=True), FunctionTransformer(reduce_features), SimpleImputer(strategy='median'), DecisionTreeClassifier(min_samples_leaf=30, random_state=88) ) #plot importances of plain model using only categorical X_train, y_train, X_val, y_val = arrange(train, categorical_features, target) pipeline =make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='median'), DecisionTreeClassifier(random_state=88) ) pipeline.fit(X_train, y_train) model = pipeline.named_steps['decisiontreeclassifier'] encoder= pipeline.named_steps['onehotencoder'] encoded_columns = encoder.transform(X_val).columns #get the column names back model.feature_importances_ #feature importances vs coeffs importances = pd.Series(model.feature_importances_, encoded_columns) importances.sort_values().head(35).plot.barh(figsize=(10,8),title='importances of features') plt.show() top20cats =importances.sort_values(ascending=False).index[:19] print(top20cats) print('Train Accuracy', pipeline.score(X_train, y_train)) print('validation accuracy', pipeline.score(X_val, y_val)) #predict on test #y_pred = pipeline.predict(X_test) cats2 = ['quantity', 
'waterpoint_type', 'payment_type','permit', 'management', 'region','basin', 'scheme_management', 'quantity_group','public_meeting','extraction_type_group'] cats3 = ['quantity', 'waterpoint_type','source_type','source', 'payment_type','permit', 'management', 'extraction_type', 'region','basin', 'scheme_management', 'quantity_group'] ft = ncols +cats2 ft train,test, _ = setup_data() #reset dataframes print(train.shape,test.shape) X_train, y_train, X_val, y_val = arrange(train, ft, target) pipeline =make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='mean'), DecisionTreeClassifier(random_state=88) ) pipeline.fit(X_train, y_train) print('Train Accuracy', pipeline.score(X_train, y_train)) print('validation accuracy', pipeline.score(X_val, y_val)) pipeline_red = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='median'), DecisionTreeClassifier(min_samples_leaf=19, random_state=88) ) pipeline_red.fit(X_train, y_train) print('Train Accuracy', pipeline_red.score(X_train, y_train)) print('validation accuracy', pipeline_red.score(X_val, y_val)) train,test, _ = setup_data() #reset dataframes features=categorical_features+ncols X_train, y_train, X_val, y_val = arrange(train, features, target) pipeline_red = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='median'), DecisionTreeClassifier(min_samples_leaf=18, random_state=88) ) pipeline_red.fit(X_train, y_train) print('Train Accuracy', pipeline_red.score(X_train, y_train)) print('validation accuracy', pipeline_red.score(X_val, y_val)) X_test = wrangle(test)[features] y_pred = pipeline_red.predict(X_test) y_pred.shape test['status_group']= y_pred sub= test[['id','status_group']] sub.set_index('id',inplace=True) sub.to_csv('ksub.csv') train,test, _ = setup_data() #reset dataframes cat4= ['quantity_group', 'waterpoint_type_group', 'permit', 'management','region', 'basin','source_type','gps_height', 'region_code', 
'extraction_type_group'] X_train, y_train, X_val, y_val = arrange(train, cat4, target) pipeline_red = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='median'), DecisionTreeClassifier(min_samples_leaf=18 , random_state=88) ) pipeline_red.fit(X_train, y_train) print('Train Accuracy', pipeline_red.score(X_train, y_train)) print('validation accuracy', pipeline_red.score(X_val, y_val)) ###Output arrange ['quantity_group', 'waterpoint_type_group', 'permit', 'management', 'region', 'basin', 'source_type', 'gps_height', 'region_code', 'extraction_type_group'] target status_group Train Accuracy 0.7746632996632996 validation accuracy 0.7500841750841751
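The min_samples_leaf tuning used throughout the notebook can be illustrated in isolation. A minimal sketch on synthetic data (an assumed stand-in for the waterpumps set, so the numbers will not match the scores above): an unconstrained tree memorizes the training data, and raising the leaf size trades training accuracy for generalization.

```python
# Illustration only: synthetic data, not the waterpumps CSVs.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=8)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, train_size=0.80, stratify=y, random_state=8)

scores = {}
for leaf in [1, 5, 10, 20, 40]:
    tree = DecisionTreeClassifier(min_samples_leaf=leaf, random_state=88)
    tree.fit(X_train, y_train)
    # (train accuracy, validation accuracy) for each leaf size
    scores[leaf] = (tree.score(X_train, y_train), tree.score(X_val, y_val))
    print(leaf, scores[leaf])
```

With leaf size 1 the training accuracy is a perfect memorized fit; larger leaves shrink the train/validation gap, which is the effect exploited by the `pipeline_red` variants above.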
draft_notebooks/ablation_study.ipynb
###Markdown Ablation Study ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt from xgboost import XGBClassifier from sklearn.model_selection import StratifiedKFold, GridSearchCV # initializing params columns = ['Ablation Category', 'Accuracy', 'Precision', 'Recall', 'F1', 'ROC AUC'] # initializing ablation categories baseline = [] demographics = ['gender'] focal_page_qty_co_prod = ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size'] focal_page_nature_co_prod = ['content_token_count', 'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m', 'content_token_vs_token'] contribution_relevance = ['contribution_similarity'] focal_page_quality_co_prod = ['avg_persistence'] focal_page_coordination_qty = ['page_talk_edits'] activity_in_wiki_community = ['tenure', 'total_edited_pages', 'ns1_edit_dist', 'ns2_edit_dist', 'ns3_edit_dist', 'ns4_edit_dist', 'ns5_edit_dist', 'ns6_edit_dist', 'ns7_edit_dist', 'ns8_edit_dist', 'ns9_edit_dist', 'ns10_edit_dist', 'ns11_edit_dist', 'ns12_edit_dist', 'ns13_edit_dist', 'ns14_edit_dist', 'ns15_edit_dist'] topical_concentration = ['links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity'] overall_wiki_activity = ['page_edit_dist', 'ns0_edit_dist'] ablation_categories = {'*** All features ***': baseline, 'Demographics': demographics, 'Quantity of co-production activity within focal page': focal_page_qty_co_prod, 'Nature of co-production activity within focal page': focal_page_nature_co_prod, 'Relevance of one\'s contributions to the page\'s contents': contribution_relevance, 'Quality of co-production activity within focal page': focal_page_quality_co_prod, 'Quantity of coordination activity related to focal page': focal_page_coordination_qty, 'Nature of activity in Wikipedia community': activity_in_wiki_community, 'Topical concentration in co-production activity': topical_concentration, 'Overall activity in co-production Wikipedia 
community': overall_wiki_activity} # loading data (training set) df = pd.read_csv('data/new_train_data.csv', header=0) print('Total experts: {}'.format(len(df[df.label == 1]))) print('Total non-experts: {}'.format(len(df[df.label == 0]))) df.drop(['edit_type_exists'], axis=1, inplace=True) edit_types = [col for col in df.columns if str(col).startswith('edit_type')] print(edit_types) for edit_type in edit_types: df[edit_type].fillna(value=-1, inplace=True) model = XGBClassifier() n_estimators = [100, 120, 140, 160, 180, 200] learning_rate = [0.0001, 0.001, 0.01, 0.1, 0.2, 0.3] param_grid = dict(learning_rate=learning_rate, n_estimators=n_estimators) X = df.drop(drop_columns_gm([]), axis=1) y = df.label kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7) grid_search = GridSearchCV(model, param_grid, scoring="neg_log_loss", n_jobs=-1, cv=kfold) grid_result = grid_search.fit(X, y) # summarize results print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) # plot results scores = np.array(means).reshape(len(learning_rate), len(n_estimators)) for i, value in enumerate(learning_rate): plt.plot(n_estimators, scores[i], label='learning_rate: ' + str(value)) plt.legend() plt.xlabel('n_estimators') plt.ylabel('Log Loss') plt.savefig('n_estimators_vs_learning_rate.png') ###Output /home/yarov/anaconda3/lib/python3.7/site-packages/sklearn/model_selection/_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal. 
DeprecationWarning) ###Markdown Generalized model ###Code gm_rows = [] for ablation_category in ablation_categories.keys(): print('Ablation Category: ' + ablation_category) X = df.drop(drop_columns_gm(ablation_categories[ablation_category]), axis=1) print('Columns: {}\n'.format(list(X.columns))) y = df.label kfold = StratifiedKFold(n_splits=10, random_state=7) model = XGBClassifier(objective='binary:logistic', seed=123, n_estimators=160) metrics = get_metrics(classifier=model, x=X, y=y, cv=kfold) gm_row = [ablation_category] for metric in metrics: gm_row.append(metric) gm_rows.append(gm_row) gm_df = pd.DataFrame(gm_rows, columns=columns) gm_df.to_csv('data/new_ablation_study_gm.csv', index=False) ###Output Ablation Category: Baseline Columns: ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size', 'gender', 'ns0_edit_dist', 'page_edit_dist', 'links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity', 'avg_persistence', 'content_token_count', 'content_token_vs_token', 'contribution_similarity', 'persistence_exists', 'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m'] Ablation Category: Demographics Columns: ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size', 'ns0_edit_dist', 'page_edit_dist', 'links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity', 'avg_persistence', 'content_token_count', 'content_token_vs_token', 'contribution_similarity', 'persistence_exists', 'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m'] Ablation Category: Quantity of co-production 
activity within focal page Columns: ['gender', 'ns0_edit_dist', 'page_edit_dist', 'links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity', 'avg_persistence', 'content_token_count', 'content_token_vs_token', 'contribution_similarity', 'persistence_exists', 'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m'] Ablation Category: Nature of co-production activity within focal page Columns: ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size', 'gender', 'ns0_edit_dist', 'page_edit_dist', 'links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity', 'avg_persistence', 'contribution_similarity', 'persistence_exists'] Ablation Category: Relevance of one's contributions to the page's contents Columns: ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size', 'gender', 'ns0_edit_dist', 'page_edit_dist', 'links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity', 'avg_persistence', 'content_token_count', 'content_token_vs_token', 'persistence_exists', 'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m'] Ablation Category: Quality of co-production activity within focal page Columns: ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size', 'gender', 'ns0_edit_dist', 'page_edit_dist', 'links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity', 'content_token_count', 'content_token_vs_token', 'contribution_similarity', 'persistence_exists', 
'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m'] Ablation Category: Quantity of coordination activity related to focal page Columns: ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size', 'gender', 'ns0_edit_dist', 'page_edit_dist', 'links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity', 'avg_persistence', 'content_token_count', 'content_token_vs_token', 'contribution_similarity', 'persistence_exists', 'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m'] Ablation Category: Nature of activity in Wikipedia community Columns: ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size', 'gender', 'ns0_edit_dist', 'page_edit_dist', 'links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity', 'avg_persistence', 'content_token_count', 'content_token_vs_token', 'contribution_similarity', 'persistence_exists', 'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m'] Ablation Category: Topical concentration in co-production activity Columns: ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size', 'gender', 'ns0_edit_dist', 'page_edit_dist', 'avg_persistence', 'content_token_count', 'content_token_vs_token', 'contribution_similarity', 'persistence_exists', 'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 
'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m'] Ablation Category: Overall activity in co-production Wikipedia community Columns: ['page_edits', 'page_edits_ratio', 'edit_period_q1', 'edit_period_q2', 'edit_period_q3', 'edit_period_q4', 'mean_edit_interval', 'mean_edit_size', 'gender', 'links_overlap', 'categories_overlap', 'title_similarity', 'summary_similarity', 'avg_persistence', 'content_token_count', 'content_token_vs_token', 'contribution_similarity', 'persistence_exists', 'edit_type_a', 'edit_type_b', 'edit_type_c', 'edit_type_d', 'edit_type_e', 'edit_type_f', 'edit_type_g', 'edit_type_h', 'edit_type_i', 'edit_type_j', 'edit_type_k', 'edit_type_l', 'edit_type_m'] ###Markdown Full model ###Code fm_rows = [] for ablation_category in ablation_categories.keys(): print('Ablation Category: ' + ablation_category) X = df.drop(drop_columns_fm(ablation_categories[ablation_category]), axis=1) y = df.label kfold = StratifiedKFold(n_splits=10, random_state=7) model = XGBClassifier(objective='binary:logistic', seed=123, n_estimators=160) metrics = get_metrics(classifier=model, x=X, y=y, cv=kfold) fm_row = [ablation_category] for metric in metrics: fm_row.append(metric) fm_rows.append(fm_row) fm_df = pd.DataFrame(fm_rows, columns=columns) fm_df.to_csv('data/new_ablation_study_fm.csv', index=False) ###Output Ablation Category: Baseline Ablation Category: Demographics Ablation Category: Quantity of co-production activity within focal page Ablation Category: Nature of co-production activity within focal page Ablation Category: Relevance of one's contributions to the page's contents Ablation Category: Quality of co-production activity within focal page Ablation Category: Quantity of coordination activity related to focal page Ablation Category: Nature of activity in Wikipedia community Ablation Category: Topical concentration in co-production activity Ablation Category: Overall activity in co-production Wikipedia 
community
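The ablation loops above call two helpers, `drop_columns_gm`/`drop_columns_fm` and `get_metrics`, that are not defined anywhere in this notebook. The following is a sketch of what `get_metrics` plausibly does — an assumption about its behavior, not the original implementation — returning the five scores in the order the `columns` list expects, from cross-validated predictions:

```python
# Assumed stand-in for the notebook's undefined get_metrics helper.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from sklearn.model_selection import StratifiedKFold, cross_val_predict

def get_metrics(classifier, x, y, cv):
    # cross-validated class-1 probabilities, thresholded at 0.5 for labels
    y_prob = cross_val_predict(classifier, x, y, cv=cv,
                               method='predict_proba')[:, 1]
    y_pred = (y_prob >= 0.5).astype(int)
    return [accuracy_score(y, y_pred), precision_score(y, y_pred),
            recall_score(y, y_pred), f1_score(y, y_pred),
            roc_auc_score(y, y_prob)]

# smoke test on synthetic data (the real notebook passes an XGBClassifier)
X, y = make_classification(n_samples=400, n_features=10, random_state=7)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=7)
metrics = get_metrics(LogisticRegression(max_iter=1000), X, y, cv)
print(metrics)
```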
notebooks/Health_Analytics_Synthea_Test.ipynb
###Markdown Getting ready for Health Analytics **Playing around with getting Synthea data into Colab** ###Code import pandas as pd # Don't wrap repr(DataFrame) across additional lines pd.set_option("display.expand_frame_repr", False) # Set max rows displayed in output to 50 pd.set_option("display.max_rows", 50) # Load data from GitHub (clone first, then use) !git clone https://github.com/ProfAaronBaird/HealthAnalytics.git # Load data path_root = "HealthAnalytics/data/synthea/" path_patients_csv = 'csv_ga_1500_patients/patients.csv' path_encounters_csv = 'csv_ga_1500_patients/encounters.csv' dfp = pd.read_csv(path_root + path_patients_csv) dfe = pd.read_csv(path_root + path_encounters_csv) dfp.head() dfe.head() dfp.columns dfe.columns # Join the encounter data with the patient data dfj = dfe.set_index('Id').join(dfp.set_index('Id')) dfj.head() # Group by patient ID and sum the claim cost dfjg = dfj.groupby('PATIENT')['TOTAL_CLAIM_COST'].sum().reset_index() dfjg.sort_values(by='TOTAL_CLAIM_COST',axis=0,ascending=False) dfjg.describe() print(dfjg) # Top 10 most expensive patients dfjg.nlargest(n=10,columns=['TOTAL_CLAIM_COST']) # Info on most expensive patient dfe.loc[dfe['PATIENT'] == 'd19ed2e8-7eda-0bef-be7f-d8c9257f4198'] # Get top 10% dfjg['decile'] = pd.qcut(dfjg['TOTAL_CLAIM_COST'], q=10, labels=False, precision=0) print(dfjg.sort_values(by='decile',axis=0,ascending=False)) dfjg.loc[dfjg['PATIENT'] == 'd19ed2e8-7eda-0bef-be7f-d8c9257f4198'] # Get ready for supervised learning... 
classify column for users in the top 10% of cost (utilization) dfjg['risk_highest'] = 0 dfjg.loc[dfjg.decile == 9, 'risk_highest'] = 1 dfjg['risk_rising'] = 0 dfjg.loc[dfjg.decile.isin([7, 8]), 'risk_rising'] = 1 dfjg['risk_moderate'] = 0 dfjg.loc[dfjg.decile.isin([4, 5, 6]), 'risk_moderate'] = 1 dfjg['risk_low'] = 0 dfjg.loc[dfjg.decile.isin([0, 1, 2, 3]), 'risk_low'] = 1 print(dfjg) dfp.columns dfjg.columns # Merge back with patient dataframe (on the patient key; an index-based merge would misalign patients) dfm = pd.merge(dfp, dfjg, left_on='Id', right_on='PATIENT') dfm.describe() dfm.head() ###Output _____no_output_____ ###Markdown New things learned: Adding groupby sum column to the main dataframe: use transform:df['Data4'] = df['Data3'].groupby(df['Date']).transform('sum')Ideas: All under the heading "Risk Stratification":- For high_util, might actually want four columns, risk_highest, risk_rising, risk_moderate, risk_low- Might also want another way to categorize risk, number of conditions and see if correlated with costs: https://www.nachc.org/wp-content/uploads/2019/03/Risk-Stratification-Action-Guide-Mar-2019.pdf **Playing around with R code in-line with Python** ###Code %load_ext rpy2.ipython %%R x <- seq(0, 2*pi, length.out=50) x ###Output [1] 0.0000000 0.1282283 0.2564565 0.3846848 0.5129131 0.6411414 0.7693696 [8] 0.8975979 1.0258262 1.1540544 1.2822827 1.4105110 1.5387393 1.6669675 [15] 1.7951958 1.9234241 2.0516523 2.1798806 2.3081089 2.4363372 2.5645654 [22] 2.6927937 2.8210220 2.9492502 3.0774785 3.2057068 3.3339351 3.4621633 [29] 3.5903916 3.7186199 3.8468481 3.9750764 4.1033047 4.2315330 4.3597612 [36] 4.4879895 4.6162178 4.7444460 4.8726743 5.0009026 5.1291309 5.2573591 [43] 5.3855874 5.5138157 5.6420439 5.7702722 5.8985005 6.0267288 6.1549570 [50] 6.2831853
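The `groupby(...).transform('sum')` pattern noted under "New things learned" can be checked on a toy frame (synthetic rows, not Synthea data): it attaches each patient's total cost to every row in one step, avoiding the separate groupby/join round-trip used earlier for `TOTAL_CLAIM_COST`.

```python
import pandas as pd

df = pd.DataFrame({'PATIENT': ['a', 'a', 'b'],
                   'TOTAL_CLAIM_COST': [100.0, 50.0, 30.0]})

# one line, index-aligned with df: each row gets its patient's group total
df['patient_total'] = df.groupby('PATIENT')['TOTAL_CLAIM_COST'].transform('sum')
print(df)  # patient_total is [150.0, 150.0, 30.0]
```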
_notebooks/2020-06-11-DCEP-Digital-Corpus-of-the-European-Parliament.ipynb
###Markdown DCEP, Digital Corpus of the European Parliament> Documents published on the European Parliament's official website- toc: false- badges: false- comments: false- author: Morgan McGuire- categories: [DCEP, translation, nmt, mt] Available for Download ✅⚠️ Always check the license of the data source before using the data ⚠️- Main page: [https://ec.europa.eu/jrc/en/language-technologies/dcep](https://ec.europa.eu/jrc/en/language-technologies/dcep)- Download Link: [https://wt-public.emm4u.eu/Resources/DCEP-2013/DCEP-Download-Page.html](https://wt-public.emm4u.eu/Resources/DCEP-2013/DCEP-Download-Page.html)- Extraction Instructions: [https://wt-public.emm4u.eu/Resources/DCEP-2013/DCEP-extract-README.html](https://wt-public.emm4u.eu/Resources/DCEP-2013/DCEP-extract-README.html)- Format: **Sentence-aligned data is in plain text** Brief DescriptionContains the majority of the documents published on the European Parliament's official website. It comprises a variety of document types, from press releases to session and legislative documents related to European Parliament's activities and bodies. The current version of the corpus contains documents that were produced between 2001 and 2012. Other Notes- Lines of text: 46,146- GA Word count: 1,029,348 Word Count Distribution ###Code #hide_input import seaborn as sns import matplotlib.pyplot as plt sns.distplot(ga_df.ga_len, kde=False) plt.title('ga word count distribution'); ###Output _____no_output_____ ###Markdown Code to Extract Files to Pandas DataFrameGA-EN specific instructions are below, for more info see the official [extraction instructions page](https://wt-public.emm4u.eu/Resources/DCEP-2013/DCEP-extract-README.html) 1. 
Download and extract language files ###Code !wget -q http://optima.jrc.it/Resources/DCEP-2013/sentences/DCEP-sentence-GA-pub.tar.bz2 !wget -q http://optima.jrc.it/Resources/DCEP-2013/sentences/DCEP-sentence-EN-pub.tar.bz2 !tar jxf DCEP-sentence-GA-pub.tar.bz2 !tar jxf DCEP-sentence-EN-pub.tar.bz2 ###Output _____no_output_____ ###Markdown 2. Download and extract language pair info ###Code !wget -q http://optima.jrc.it/Resources/DCEP-2013/langpairs/DCEP-EN-GA.tar.bz2 !tar jxf DCEP-EN-GA.tar.bz2 ###Output _____no_output_____ ###Markdown 3. Download and extract alignment scripts ###Code !wget -q http://optima.jrc.it/Resources/DCEP-2013/DCEP-extract-scripts.tar.bz2 !tar jxvf DCEP-extract-scripts.tar.bz2 ###Output _____no_output_____ ###Markdown 4. Create aligned file> The `--numbering-filter` is a crude but useful heuristic that attempts to drop numberingsand short titles from the output. It works simply by matching sentences on both sidesagainst a Unicode regex that looks for two alphabetic characters with space between them.> The `--length-filter-level=LENGTH_FILTER_LEVEL` argument is used to throw away as suspiciousall bisentences where the ratio of the shorter and the longer sentence (in character length)is less than LENGTH_FILTER_LEVEL percent. ###Code !cd dcep && ./src/languagepair.py --numbering-filter --length-filter-level=40 EN-GA > EN-GA-bisentences.txt ###Output _____no_output_____ ###Markdown 5. Open as a Dataframe ###Code import pandas as pd df = pd.read_csv('dcep/EN-GA-bisentences.txt', header=None, sep='\t') df.columns = ['en', 'ga'] df.to_csv('dcep_en-ga_bisentences.csv') print(len(df)) df.head() ###Output 46147
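The two alignment-script filters described in step 4 are easy to sketch in Python. This is a rough re-implementation of the documented behaviour, not the DCEP `languagepair.py` code itself; the exact regex and threshold semantics are assumptions based on the README wording (two alphabetic characters with a space between them on both sides, and a shorter-to-longer character-length ratio of at least `LENGTH_FILTER_LEVEL` percent).

```python
import re

# "two alphabetic characters with space between them" (Unicode-aware)
HAS_WORDS = re.compile(r'[^\W\d_]\s+[^\W\d_]', re.UNICODE)

def keep_bisentence(en, ga, length_filter_level=40):
    if not (HAS_WORDS.search(en) and HAS_WORDS.search(ga)):
        return False  # likely a numbering or a very short title
    shorter, longer = sorted([len(en), len(ga)])
    # drop pairs whose length ratio is below the threshold (in percent)
    return shorter * 100 >= length_filter_level * longer

print(keep_bisentence('The committee approved the report.',
                      "D'fhormheas an coiste an tuarascáil."))  # True
print(keep_bisentence('3.1.2', '3.1.2'))  # False: numbering, no words
```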
03_hmms.ipynb
###Markdown Introduction to Hidden Markov Modelshttps://en.wikipedia.org/wiki/Hidden_Markov_modelConsider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather, but she knows general trends.Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.![hmm](https://github.com/lisaong/hss/blob/master/assets/400px-HMMGraph.svg.png?raw=1) ###Code states = ('Rainy', 'Sunny') # initial state of the HMM (tends to rain) start_probability = {'Rainy': 0.6, 'Sunny': 0.4} ###Output _____no_output_____ ###Markdown Transition Probability: $P(a_i|a_{i-1})$This is the probability of state a[i] given a[i-1]. ###Code transition_probability = { 'Rainy' : {'Rainy': 0.7, 'Sunny': 0.3}, 'Sunny' : {'Rainy': 0.4, 'Sunny': 0.6}, } ###Output _____no_output_____ ###Markdown Emission Probability: $P(b_i|a_i)$This is the probability of result b[i] given state a[i] ###Code emission_probability = { 'Rainy' : {'walk': 0.1, 'shop': 0.4, 'clean': 0.5}, 'Sunny' : {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}, } ###Output _____no_output_____ ###Markdown Hidden Markov ModelGiven what Bob did in 3 days (walk, shop, clean), what was the weather during those 3 days?$P(b_0, ..., b_{n-1}|a_0, ... 
, a_{n-1}) = \prod P(b_i|a_i) \prod P(a_i|a_{i-1})$ Viterbi AlgorithmThis algorithm is useful in finding the subsequence of an observation that matches best (on average) to a given Hidden Markov Model.https://en.wikipedia.org/wiki/Viterbi_algorithm Applications- Finding the most likely sequence of events (3 rainy days in a row) that caused an observation (Bob stayed home)- Finding the most likely sequence of speech phonemes that resulted in a spoken phrase- Finding the most likely sequence of poses that best matches an activity- https://en.wikipedia.org/wiki/Hidden_Markov_modelApplications ###Code from pprint import pprint import numpy as np def viterbi(obs, states, start_p, trans_p, emit_p): V = [{}] for st in states: V[0][st] = {"prob": start_p[st] * emit_p[st][obs[0]], "prev": None} print('Viterbi table at step 0:') pprint(V) # Run viterbi algorithm for step t>0 for t in range(1, len(obs)): V.append({}) for st in states: # Compute the state that results in highest probability at step t tr_probs = np.array([V[t-1][prev_st]["prob"]*trans_p[prev_st][st] for prev_st in states]) max_tr_prob = tr_probs.max() prev_st_selected = states[tr_probs.argmax()] max_prob = max_tr_prob * emit_p[st][obs[t]] V[t][st] = {"prob": max_prob, "prev": prev_st_selected} print('Viterbi table at step %s:' % t) pprint(V) print('================================\nFinal outcome:') opt = [] # The highest probability at the end of the sequence max_prob = max(value["prob"] for value in V[-1].values()) # Get most probable state and its backtrack for st, data in V[-1].items(): if data["prob"] == max_prob: opt.append(st) previous = st break print(f'final state: {previous}, prob: {max_prob}') # Follow the backtrack till the first observation for t in range(len(V)-2, -1, -1): opt.insert(0, V[t+1][previous]["prev"]) previous = V[t+1][previous]["prev"] print(f'{t}: {previous} {V[t][previous]["prob"]}') print(f'The steps of states are {" ".join(opt)} with highest probability of {max_prob}') viterbi( 
('walk', 'shop', 'clean'), states, start_probability, transition_probability, emission_probability) viterbi( ('walk', 'clean', 'walk', 'shop'), states, start_probability, transition_probability, emission_probability) ###Output Viterbi table at step 0: [{'Rainy': {'prev': None, 'prob': 0.06}, 'Sunny': {'prev': None, 'prob': 0.24}}] Viterbi table at step 1: [{'Rainy': {'prev': None, 'prob': 0.06}, 'Sunny': {'prev': None, 'prob': 0.24}}, {'Rainy': {'prev': 'Sunny', 'prob': 0.048}, 'Sunny': {'prev': 'Sunny', 'prob': 0.0144}}] Viterbi table at step 2: [{'Rainy': {'prev': None, 'prob': 0.06}, 'Sunny': {'prev': None, 'prob': 0.24}}, {'Rainy': {'prev': 'Sunny', 'prob': 0.048}, 'Sunny': {'prev': 'Sunny', 'prob': 0.0144}}, {'Rainy': {'prev': 'Rainy', 'prob': 0.00336}, 'Sunny': {'prev': 'Rainy', 'prob': 0.00864}}] Viterbi table at step 3: [{'Rainy': {'prev': None, 'prob': 0.06}, 'Sunny': {'prev': None, 'prob': 0.24}}, {'Rainy': {'prev': 'Sunny', 'prob': 0.048}, 'Sunny': {'prev': 'Sunny', 'prob': 0.0144}}, {'Rainy': {'prev': 'Rainy', 'prob': 0.00336}, 'Sunny': {'prev': 'Rainy', 'prob': 0.00864}}, {'Rainy': {'prev': 'Sunny', 'prob': 0.0013824000000000002}, 'Sunny': {'prev': 'Sunny', 'prob': 0.0015552}}] ================================ Final outcome: final state: Sunny, prob: 0.0015552 2: Sunny 0.00864 1: Rainy 0.048 0: Sunny 0.24 The steps of states are Sunny Rainy Sunny Sunny with highest probability of 0.0015552 ###Markdown hmmlearnInstead of computing a Hidden Markov Model manually using the Viterbi Algorithm, we can use libraries such as hmmlearn.https://hmmlearn.readthedocs.io ###Code !pip install hmmlearn from hmmlearn import hmm # https://hmmlearn.readthedocs.io/en/latest/api.html#multinomialhmm model = hmm.MultinomialHMM(n_components=len(states)) # start_probability = {'Rainy': 0.6, 'Sunny': 0.4} model.startprob_ = np.array([0.6, 0.4]) # transition_probability = { # 'Rainy' : {'Rainy': 0.7, 'Sunny': 0.3}, # 'Sunny' : {'Rainy': 0.4, 'Sunny': 0.6}, # } model.transmat_ = 
np.array( [[0.7, 0.3,], [0.4, 0.6]] ) # emission_probability = { # 'Rainy' : {'walk': 0.1, 'shop': 0.4, 'clean': 0.5}, # 'Sunny' : {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}, # } model.emissionprob_ = np.array([ # walk, shop, clean [0.1, 0.4, 0.5], [0.6, 0.3, 0.1] ]) # walk: 0, shop: 1, clean: 2 # X: ('walk', 'shop', 'clean') X = np.array([0, 1, 2]).reshape(-1, 1) # make into 2-D array model.fit(X) b = ['Rainy', 'Sunny'] [print(b[y]) for y in model.predict(X)]; # walk: 0, shop: 1, clean: 2 # X: ('walk', 'clean', 'walk', 'shop'), X = np.array([0, 2, 0, 1]).reshape(-1, 1) [print(b[y]) for y in model.predict(X)]; ###Output _____no_output_____
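The table-based viterbi() above can also be written more compactly with NumPy arrays. This is a standalone sketch (not part of the original notebook) that reproduces the same best path and probability for the ('walk', 'clean', 'walk', 'shop') observation sequence:

```python
import numpy as np

# Vectorized Viterbi over the same HMM as above.
# States: 0 = Rainy, 1 = Sunny; observations: 0 = walk, 1 = shop, 2 = clean.
def viterbi_np(obs, start_p, trans_p, emit_p):
    n_states = start_p.shape[0]
    T = len(obs)
    prob = np.zeros((T, n_states))
    prev = np.zeros((T, n_states), dtype=int)
    prob[0] = start_p * emit_p[:, obs[0]]
    for t in range(1, T):
        # scores[i, j] = prob of being in state i at t-1, then moving to j
        scores = prob[t - 1][:, None] * trans_p
        prev[t] = scores.argmax(axis=0)
        prob[t] = scores.max(axis=0) * emit_p[:, obs[t]]
    # Backtrack the most probable state sequence.
    path = [int(prob[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(prev[t][path[-1]]))
    return path[::-1], float(prob[-1].max())

start = np.array([0.6, 0.4])          # Rainy, Sunny
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
emit = np.array([[0.1, 0.4, 0.5],     # walk, shop, clean
                 [0.6, 0.3, 0.1]])
path, p = viterbi_np([0, 2, 0, 1], start, trans, emit)
print(path, p)  # [1, 0, 1, 1] -> Sunny Rainy Sunny Sunny, p ≈ 0.0015552
```

This matches the dictionary-based implementation's final answer (Sunny Rainy Sunny Sunny with probability 0.0015552).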
Classification/Naive Bayes/CategoricalNB_MinMaxScaler.ipynb
###Markdown Categorical Naive Bayes Classifier with MinMaxScaler This Code template is for Classification tasks using CategoricalNB, a Naive Bayes classifier for categorically distributed data, combined with the MinMaxScaler feature-rescaling technique in a pipeline. Required Packages ###Code !pip install imblearn import warnings import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as se from imblearn.over_sampling import RandomOverSampler from sklearn.naive_bayes import CategoricalNB from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import MinMaxScaler from sklearn.pipeline import make_pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report,plot_confusion_matrix warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown InitializationFilepath of CSV file ###Code #filepath file_path= "" ###Output _____no_output_____ ###Markdown List of features which are required for model training. ###Code #x_values features=[] ###Output _____no_output_____ ###Markdown Target feature for prediction. ###Code #y_value target='' ###Output _____no_output_____ ###Markdown Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows. ###Code df=pd.read_csv(file_path) df.head() ###Output _____no_output_____ ###Markdown Feature SelectionIt is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and the target/outcome to Y.
###Code X = df[features] Y = df[target] ###Output _____no_output_____ ###Markdown Data PreprocessingSince the majority of the machine learning models in the Sklearn library don't handle string category data and null values, we have to explicitly remove or replace null values. The snippet below defines functions that remove any null values and encode string classes in the dataset as integer classes. ###Code def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) def EncodeY(df): if len(df.unique())<=2: return df else: un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort') df=LabelEncoder().fit_transform(df) EncodedT=[xi for xi in range(len(un_EncodedT))] print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT)) return df x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) Y=NullClearner(Y) Y=EncodeY(Y) X=EncodeX(X) X.head() ###Output Encoded Target: ['acc' 'good' 'unacc' 'vgood'] to [0, 1, 2, 3] ###Markdown Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ###Code f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ###Output _____no_output_____ ###Markdown Distribution Of Target Variable ###Code plt.figure(figsize = (10,6)) se.countplot(Y) ###Output _____no_output_____ ###Markdown Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction.
The main motive is to estimate the performance of the model on new data. ###Code x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) ###Output _____no_output_____ ###Markdown Handling Target ImbalanceThe challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important.One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class.We will perform oversampling using the imblearn library. ###Code x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train) ###Output _____no_output_____ ###Markdown ModelCategoricalNB implements the categorical naive Bayes algorithm for categorically distributed data. It assumes that each feature, which is described by an index i, has its own categorical distribution.The categorical Naive Bayes classifier is suitable for classification with discrete features that are categorically distributed. The categories of each feature are drawn from a categorical distribution.Model Tuning Parameters1. alpha : float, default=1.0Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).2. fit_prior : bool, default=TrueWhether to learn class prior probabilities or not. If false, a uniform prior will be used.3. class_prior : array-like of shape (n_classes,), default=NonePrior probabilities of the classes. If specified the priors are not adjusted according to the data. MinMax Scaler:This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g.
between zero and one.[For more information click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) ###Code model = make_pipeline(MinMaxScaler(),CategoricalNB()) model.fit(x_train, y_train) ###Output _____no_output_____ ###Markdown Model Accuracyscore() method return the mean accuracy on the given test data and labels.In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. ###Code print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) ###Output Accuracy score 79.77 % ###Markdown Confusion MatrixA confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known. ###Code plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues) ###Output _____no_output_____ ###Markdown Classification ReportA Classification report is used to measure the quality of predictions from a classification algorithm. How many predictions are True, how many are False.where:- Precision:- Accuracy of positive predictions.- Recall:- Fraction of positives that were correctly identified.- f1-score:- percent of positive predictions were correct- support:- Support is the number of actual occurrences of the class in the specified dataset. ###Code print(classification_report(y_test,model.predict(x_test))) ###Output precision recall f1-score support 0 0.58 0.83 0.69 84 1 0.48 0.77 0.59 13 2 1.00 0.78 0.88 237 3 0.55 0.92 0.69 12 accuracy 0.80 346 macro avg 0.65 0.82 0.71 346 weighted avg 0.86 0.80 0.81 346
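As a quick illustration of the rescaling step used above: the per-feature transform MinMaxScaler applies by default is x' = (x − min) / (max − min). A plain-Python sketch (no sklearn required, not part of the original template):

```python
# Minimal sketch of MinMaxScaler's default [0, 1] per-feature transform.
def min_max_scale(column):
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

print(min_max_scale([2, 4, 6, 10]))  # [0.0, 0.25, 0.5, 1.0]
```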
notebooks/03-Methods and Functions/09-Functions and Methods Homework - Solutions.ipynb
###Markdown Functions and Methods Homework Solutions____**Write a function that computes the volume of a sphere given its radius.** ###Code def vol(rad): return (4/3)*(3.14)*(rad**3) # Check vol(2) ###Output _____no_output_____ ###Markdown ___**Write a function that checks whether a number is in a given range (inclusive of high and low)** ###Code def ran_check(num,low,high): #Check if num is between low and high (including low and high) if num in range(low,high+1): print('{} is in the range between {} and {}'.format(num,low,high)) else: print('The number is outside the range.') # Check ran_check(5,2,7) ###Output 5 is in the range between 2 and 7 ###Markdown If you only wanted to return a boolean: ###Code def ran_bool(num,low,high): return num in range(low,high+1) ran_bool(3,1,10) ###Output _____no_output_____ ###Markdown ____**Write a Python function that accepts a string and calculates the number of upper case letters and lower case letters.** Sample String : 'Hello Mr. Rogers, how are you this fine Tuesday?' Expected Output : No. of Upper case characters : 4 No. of Lower case Characters : 33If you feel ambitious, explore the Collections module to solve this problem! ###Code def up_low(s): d={"upper":0, "lower":0} for c in s: if c.isupper(): d["upper"]+=1 elif c.islower(): d["lower"]+=1 else: pass print("Original String : ", s) print("No. of Upper case characters : ", d["upper"]) print("No. of Lower case Characters : ", d["lower"]) s = 'Hello Mr. Rogers, how are you this fine Tuesday?' up_low(s) ###Output Original String : Hello Mr. Rogers, how are you this fine Tuesday? No. of Upper case characters : 4 No. 
of Lower case Characters : 33 ###Markdown ____**Write a Python function that takes a list and returns a new list with unique elements of the first list.** Sample List : [1,1,1,1,2,2,3,3,3,3,4,5] Unique List : [1, 2, 3, 4, 5] ###Code def unique_list(lst): # Also possible to use list(set()) x = [] for a in lst: if a not in x: x.append(a) return x unique_list([1,1,1,1,2,2,3,3,3,3,4,5]) ###Output _____no_output_____ ###Markdown ____**Write a Python function to multiply all the numbers in a list.** Sample List : [1, 2, 3, -4] Expected Output : -24 ###Code def multiply(numbers): total = 1 for x in numbers: total *= x return total multiply([1,2,3,-4]) ###Output _____no_output_____ ###Markdown ____**Write a Python function that checks whether a passed string is palindrome or not.**Note: A palindrome is word, phrase, or sequence that reads the same backward as forward, e.g., madam or nurses run. ###Code def palindrome(s): s = s.replace(' ','') # This replaces all spaces ' ' with no space ''. (Fixes issues with strings that have spaces) return s == s[::-1] # Check through slicing palindrome('nurses run') palindrome('abcba') ###Output _____no_output_____ ###Markdown ____**Hard**:Write a Python function to check whether a string is pangram or not. Note : Pangrams are words or sentences containing every letter of the alphabet at least once. For example : "The quick brown fox jumps over the lazy dog"Hint: Look at the string module ###Code import string def ispangram(str1, alphabet=string.ascii_lowercase): alphaset = set(alphabet) return alphaset <= set(str1.lower()) ispangram("The quick brown fox jumps over the lazy dog") string.ascii_lowercase ###Output _____no_output_____
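For the upper/lower-case counting exercise above, the prompt's hint about the Collections module can be followed with a Counter-based variant — a sketch, not one of the original solutions:

```python
from collections import Counter

# Count upper- vs lower-case letters with collections.Counter,
# ignoring non-alphabetic characters.
def up_low_counter(s):
    return Counter("upper" if c.isupper() else "lower"
                   for c in s if c.isalpha())

counts = up_low_counter('Hello Mr. Rogers, how are you this fine Tuesday?')
print(counts["upper"], counts["lower"])  # 4 33
```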
Python Absolute Beginner/Module_2.2_Required_Code.ipynb
###Markdown Module 2 Required Coding Activity | Requirements | |:-------------------------------| | **NOTE:** This program requires a **function** be defined, created and called. The call will send values based on user input. The function call must capture a `return` value that is used in print output. The function will have parameters and `return` a string and should otherwise use code syntax covered in module 2. | Program: fishstore()create and test fishstore()- **fishstore() takes 2 string arguments: fish & price**- **fishstore returns a string in sentence form** - **gather input for fish_entry and price_entry to use in calling fishstore()**- **print the return value of fishstore()**>example of output: **`Fish Type: Guppy costs $1`** ###Code # [ ] create, call and test fishstore() function def fishstore(fish, price): meal_cost = ('The fish ' + fish.title() + ' will cost $' + price) return meal_cost type_fish = input("What fish do you want to enter? ") cost_fish = input("what is the price of the fish? ") fishstore(type_fish, cost_fish) ###Output What fish do you want to enter? guppy what is the price of the fish? 5 ###Markdown Module 2 Required Coding Activity | Requirements | |:-------------------------------| | **NOTE:** This program requires a **function** be defined, created and called. The call will send values based on user input. The function call must capture a `return` value that is used in print output. The function will have parameters and `return` a string and should otherwise use code syntax covered in module 2. 
| Program: fishstore()create and test fishstore()- **fishstore() takes 2 string arguments: fish & price**- **fishstore returns a string in sentence form** - **gather input for fish_entry and price_entry to use in calling fishstore()**- **print the return value of fishstore()**>example of output: **`Fish Type: Guppy costs $1`** ###Code # [ ] create, call and test fishstore() function def fishstore(): fish_entry = input("Fish title: ") price_entry = input("Fish price: ") return "Fish Type: " + fish_entry + ", costs $" + price_entry print(fishstore()) ###Output Fish Type: Salmon, costs $25 ###Markdown Module 2 Required Coding Activity | Requirements | |:-------------------------------| | **NOTE:** This program requires a **function** be defined, created and called. The call will send values based on user input. The function call must capture a `return` value that is used in print output. The function will have parameters and `return` a string and should otherwise use code syntax covered in module 2. | Program: fishstore()create and test fishstore()- **fishstore() takes 2 string arguments: fish & price**- **fishstore returns a string in sentence form** - **gather input for fish_entry and price_entry to use in calling fishstore()**- **print the return value of fishstore()**>example of output: **`Fish Type: Guppy costs $1`** ###Code # [ ] create, call and test fishstore() function def fishstore(fish, price): return "Fish Type: " + fish.title() + " costs $" + price fish_entry = input("Enter a type of fish: ") price_entry = input("Enter the price: ") print(fishstore(fish_entry, price_entry)) ###Output Enter a type of fish: clownfish Enter the price: 1 Fish Type: Clownfish costs $1 ###Markdown Module 2 Required Coding Activity | Requirements | |:-------------------------------| | **NOTE:** This program requires a **function** be defined, created and called. The call will send values based on user input. 
The function call must capture a `return` value that is used in print output. The function will have parameters and `return` a string and should otherwise use code syntax covered in module 2. | Program: fishstore()create and test fishstore()- **fishstore() takes 2 string arguments: fish & price**- **fishstore returns a string in sentence form** - **gather input for fish_entry and price_entry to use in calling fishstore()**- **print the return value of fishstore()**>example of output: **`Fish Type: Guppy costs $1`** ###Code # [ ] create, call and test fishstore() function # one possible solution: def fishstore(fish, price): return "Fish Type: " + fish.title() + " costs $" + price fish_entry = input("Enter a type of fish: ") price_entry = input("Enter the price: ") print(fishstore(fish_entry, price_entry)) ###Output _____no_output_____ ###Markdown Module 2 Required Coding Activity | Requirements | |:-------------------------------| | **NOTE:** This program requires a **function** be defined, created and called. The call will send values based on user input. The function call must capture a `return` value that is used in print output. The function will have parameters and `return` a string and should otherwise use code syntax covered in module 2. | Program: fishstore()create and test fishstore()- **fishstore() takes 2 string arguments: fish & price**- **fishstore returns a string in sentence form** - **gather input for fish_entry and price_entry to use in calling fishstore()**- **print the return value of fishstore()**>example of output: **`Fish Type: Guppy costs $1`** ###Code # [ ] create, call and test fishstore() function def fishstore(fish, price): sentence = "Fish type: " + fish.title() + ", costs: $" + price return sentence fish_entry = input("Enter fish species: ") price_entry = input("Enter price of fish: ") print(fishstore(fish_entry, price_entry)) ###Output Enter fish species: gold Enter price of fish: 12 ###Markdown Module 2 Required Coding Activity | Requirements | |:-------------------------------| | **NOTE:** This program requires a **function** be defined, created and called. The call will send values based on user input.
The function call must capture a `return` value that is used in print output. The function will have parameters and `return` a string and should otherwise use code syntax covered in module 2. | Program: fishstore()create and test fishstore()- **fishstore() takes 2 string arguments: fish & price**- **fishstore returns a string in sentence form** - **gather input for fish_entry and price_entry to use in calling fishstore()**- **print the return value of fishstore()**>example of output: **`Fish Type: Guppy costs $1`** ###Code # [ ] create, call and test fishstore() function def fishstore(fish, price): # use the parameters, not the global input variables return "Fish type: " + fish + " Cost: $" + price fish_entry = input("What type of fish are you selling today? ") price_entry = input("How much are you selling it for? $") print(fishstore(fish_entry, price_entry)) ###Output What type of fish are you selling today? Salmon How much are you selling it for? $12/lb Fish type: Salmon Cost: $12/lb ###Markdown Module 2 Required Coding Activity | Requirements | |:-------------------------------| | **NOTE:** This program requires a **function** be defined, created and called. The call will send values based on user input. The function call must capture a `return` value that is used in print output. The function will have parameters and `return` a string and should otherwise use code syntax covered in module 2.
| Program: fishstore()create and test fishstore()- **fishstore() takes 2 string arguments: fish & price**- **fishstore returns a string in sentence form** - **gather input for fish_entry and price_entry to use in calling fishstore()**- **print the return value of fishstore()**>example of output: **`Fish Type: Guppy costs $1`** ###Code # [ ] create, call and test fishstore() function def fishstore(fish, price): return "Fish Type : " + fish + " costs $" + price def fish_need(): fish_entry = input("Fish Name : ") price_entry = input("Fish Cost : ") return (fish_entry, price_entry) fish_entry, price_entry = fish_need() print(fishstore(fish_entry, price_entry)) ###Output Fish Name : guppy Fish Cost : 12
deeplearning1/nbs/lesson3.ipynb
###Markdown Training a better model ###Code from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
batch_size=64
###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
1. How is this possible?
2. Is this desirable?
The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set. The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model. So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.)
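To see the train/test asymmetry concretely, here is a tiny NumPy sketch of a dropout layer (illustrative only - it is not the Keras implementation, and the array size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p, training):
    # During training, each activation is zeroed with probability p.
    # At evaluation time nothing is dropped, so the layer is a no-op here.
    if not training:
        return x
    return x * (rng.random(x.shape) >= p)

acts = np.ones(100_000)          # pretend activations, all equal to 1
train_mean = dropout(acts, p=0.5, training=True).mean()
eval_mean = dropout(acts, p=0.5, training=False).mean()

print(train_mean)  # ~0.5: half the signal is silenced while training
print(eval_mean)   # 1.0: the full network is used for validation
```

With p = 0.5, training sees roughly half the signal that validation sees - which is exactly why validation accuracy can come out higher than training accuracy.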
Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (*conv*) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.
As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:]
###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
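Seen on its own, the precompute-once recipe is very simple: run every input through the frozen half a single time, cache the result, and train only the small top model on the cached arrays. A minimal pure-NumPy sketch (the shapes and the linear "top model" are made up for illustration; they stand in for the conv and dense halves):

```python
import numpy as np

rng = np.random.default_rng(0)

W_frozen = rng.standard_normal((10, 32))  # stands in for the frozen conv layers
w_top = np.zeros(32)                      # the small trainable "dense" model

def frozen_features(x):
    # The expensive, frozen half: its output never changes during training,
    # so computing it once per example is enough.
    return np.maximum(x @ W_frozen, 0.0)

X = rng.standard_normal((200, 10))
y = rng.standard_normal(200)

features = frozen_features(X)             # computed a single time, up front

def mse(w):
    return float(np.mean((features @ w - y) ** 2))

before = mse(w_top)
for epoch in range(20):                   # every epoch trains on the cache only
    grad = 2 * features.T @ (features @ w_top - y) / len(y)
    w_top -= 0.005 * grad
after = mse(w_top)

print(before, after)
```

The training loop never touches `W_frozen` again - that is the entire saving we get from pre-calculating the conv-layer features.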
###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)
trn_features = load_array(model_path+'train_convlayer_features.bc')
val_features = load_array(model_path+'valid_convlayer_features.bc')
trn_features.shape
###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)
def get_fc_model():
    model = Sequential([
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(2, activation='softmax')
        ])
    for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model
fc_model = get_fc_model()
###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8,
             batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.
We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes. Which types of augmentation are appropriate depends on your data.
For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
       height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
       channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). ###Code # Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data.
Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
                               height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different. Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well.
The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:
1. Adding batchnorm to a model can result in **10x or more improvements in training speed**
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.
As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.
This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.
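Written out, the forward pass described above is only a few lines. Here is a NumPy sketch for a single mini-batch (illustrative only - a real layer also keeps running statistics for use at inference time, and this is not the Keras implementation):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch (zero mean, unit variance)...
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    # ...then let the two trainable parameters choose an arbitrary
    # standard deviation (gamma) and mean (beta); both take part in backprop.
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = 50.0 * rng.standard_normal((64, 8)) + 200.0   # badly scaled activations
out = batchnorm_forward(x, gamma=np.ones(8), beta=np.zeros(8))

print(out.mean(axis=0).round(4))  # ~0 for every feature
print(out.std(axis=0).round(3))   # ~1 for every feature
```

With gamma and beta left at their identity values, the layer simply rescales each feature to mean 0 and standard deviation 1, no matter how badly scaled the incoming activations were.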
Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, 
nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')
final_model.optimizer.lr=0.001
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
bn_model.save_weights(model_path + 'final3.h5')
###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
batch_size=64
###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
1. How is this possible?
2. Is this desirable?
The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set. The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting.
However, it can also result in underfitting if overused, and this is something we should be careful of with our model. So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (*conv*) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.
As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) batches.class_indices val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)
def get_fc_model():
    model = Sequential([
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(2, activation='softmax')
        ])
    for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model
fc_model = get_fc_model()
###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8,
             batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.
We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data.
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes. Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
       height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
       channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop. This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
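These two steps can be sketched in a few lines of numpy - note that this is just an illustration of the forward pass, not any framework's actual implementation: gamma and beta stand for the two extra trainable parameters, and the small eps constant is my addition, a standard trick to avoid dividing by zero.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Step 1: normalize each activation to zero mean and unit variance,
    # using the statistics of the current batch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Step 2: the two trainable parameters set an arbitrary
    # standard deviation (gamma) and mean (beta)
    return gamma * x_hat + beta

# Badly scaled activations: mean 10, standard deviation around 50
x = np.random.randn(64, 4) * 50 + 10
gamma, beta = np.ones(4), np.zeros(4)
out = batchnorm_forward(x, gamma, beta)
# With gamma=1 and beta=0 the output is simply the normalized batch
```

In a real layer, gamma and beta would be updated by backprop along with the weights - which is exactly why the normalization stays visible to the gradient calculations.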
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), 
loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline from imp import reload import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) import keras.backend as K K.set_image_dim_ordering('th') batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. 
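The mechanism can be sketched in a few lines of numpy. This is the "inverted" formulation - an illustrative sketch, not what any particular framework does internally - where the surviving activations are scaled up by 1/(1-p) during training so that the expected activation is the same at training and test time:

```python
import numpy as np

def dropout(x, p, training):
    """'Inverted' dropout sketch: zero each activation with probability p
    during training, scaling the survivors by 1/(1-p). At test time it
    does nothing at all."""
    if not training:
        return x
    mask = np.random.rand(*x.shape) >= p   # keep with probability 1-p
    return x * mask / (1 - p)

np.random.seed(0)
acts = np.ones((1000, 10))
train_out = dropout(acts, p=0.5, training=True)
eval_out = dropout(acts, p=0.5, training=False)
# About half of train_out is zeroed and the rest scaled to 2.0,
# while eval_out passes through unchanged
```

With this formulation nothing needs to change between training and evaluation, since the expected value of each activation is identical in both modes.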
The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model. So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to halve the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity. We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data.
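As a tiny illustration of what "reasonable modifications" means, here's a numpy sketch of two label-preserving transformations - a random horizontal flip and a small random shift. The 2-pixel shift limit is an arbitrary choice for the example, and np.roll stands in for a proper pad-and-crop:

```python
import numpy as np

def augment(img, rng):
    # Random horizontal flip, applied half the time
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # Small random horizontal shift of up to 2 pixels
    # (np.roll stands in for a proper pad-and-crop here)
    dx = int(rng.integers(-2, 3))
    return np.roll(img, dx, axis=1)

rng = np.random.default_rng(0)
img = np.arange(36).reshape(6, 6)   # a stand-in 6x6 'image'
batch = [augment(img, rng) for _ in range(4)]
# Four slightly different variants of the same underlying image
```

Real augmentation (as in the keras generator above) adds rotations, zooms and so on, but the principle is the same: every epoch sees a slightly different version of each image.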
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, width_zoom_range=0.2, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). 
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('cat.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). 
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(1000, activation='softmax') ] p=0.6 bn_model = Sequential(get_bn_layers(0.6)) bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5') def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, 
nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. 
It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundently re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. 
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) batches.class_indices val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to half the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularlization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. 
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). 
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:
1. Adding batchnorm to a model can result in **10x or more improvements in training speed**
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.

As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.

This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
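The normalize-then-rescale arithmetic described above can be sketched in a few lines of plain Python for a single feature across one mini-batch. This is a minimal illustration only: the `batchnorm_forward`, `gamma` and `beta` names are ours, and it ignores the running statistics, per-channel handling and backprop that the real `BatchNormalization` layer manages for us.

```python
# Minimal batchnorm forward pass for one feature across a mini-batch.
# gamma and beta are the two extra trainable parameters described above.
def batchnorm_forward(xs, gamma=1.0, beta=0.0, eps=1e-5):
    n = len(xs)
    mean = sum(xs) / n                                       # batch mean
    var = sum((x - mean) ** 2 for x in xs) / n               # batch variance
    normed = [(x - mean) / (var + eps) ** 0.5 for x in xs]   # zero mean, unit std
    return [gamma * x + beta for x in normed]                # learnt rescale/shift

acts = [10.0, 12.0, 14.0, 16.0]
out = batchnorm_forward(acts)  # centred around 0 with unit scale
```

With the default `gamma=1, beta=0` this is plain normalization; the learnt parameters let the layer recover any mean and scale it needs, which is the point made in the next sentence.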
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), 
loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') K.set_value(final_model.optimizer.lr, 0.001) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
1. How is this possible?
2. Is this desirable?

The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set. The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set.
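The train-time behaviour just described can be sketched in plain Python (an illustration only; the `dropout` helper name is ours, and Keras's `Dropout` layer does all of this for us):

```python
import random

# Train-time dropout: each activation is independently set to zero with
# probability p. On average the layer's output is scaled by (1 - p), which
# is why weights trained with p=0.5 dropout get halved once dropout is removed.
def dropout(acts, p):
    return [0.0 if random.random() < p else a for a in acts]

random.seed(0)
example = dropout([0.5, 1.0, 1.5, 2.0], p=0.5)  # some activations zeroed at random
```

At validation time no activations are dropped, so the network sees its full capacity, which is why validation accuracy can come out higher than training accuracy.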
The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model. So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high-level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (*conv*) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.

As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) batches.class_indices val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to halve the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.

We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data.
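As a minimal illustration of that idea (nothing like the full Keras generator shown earlier, just the simplest possible transform), horizontal flipping produces a new, equally valid training image by reversing each pixel row:

```python
# Horizontal flip of an image stored as rows of pixel values:
# each row is reversed, giving a mirror image with the same label.
def flip_horizontal(img):
    return [list(reversed(row)) for row in img]

img = [[1, 2, 3],
       [4, 5, 6]]
flipped = flip_horizontal(img)  # [[3, 2, 1], [6, 5, 4]]
```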
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:
1. Adding batchnorm to a model can result in **10x or more improvements in training speed**
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.

As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.

This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
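Concretely, the normalize-then-rescale computation described above can be sketched in a few lines of numpy. This is a simplified, hypothetical forward pass over a mini-batch, not the actual Keras `BatchNormalization` internals (which also track running statistics for test time):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Step 1: normalize each feature over the mini-batch
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Step 2: the learnt multiply (gamma) and add (beta) parameters
    # let the layer choose its own scale and shift
    return gamma * x_hat + beta

x = np.random.randn(64, 10) * 50 + 3   # badly scaled activations
out = batchnorm_forward(x, gamma=1.0, beta=0.0)
print(out.mean(axis=0))                # ~0 for every feature
print(out.std(axis=0))                 # ~1 for every feature
```

With `gamma=1, beta=0` this is plain standardization; during training, gradients flow into `gamma` and `beta` as well, which is what lets a layer re-establish a different mean or scale if it needs one.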
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(1000, activation='softmax') ] p=0.6 bn_model = Sequential(get_bn_layers(0.6)) bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5') def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, 
nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. 
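As an aside, the effect of dropout on activation scale can be checked numerically; this is also why the weight-copying code later in this notebook halves the weights when dropout (p=0.5) is removed. The sketch below simulates non-rescaled ("test-time style") dropout; actual `Dropout` layers may instead rescale during training, but the expected-value argument is the same:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(100000)             # activations entering a dropout layer
p = 0.5                          # dropout probability

# Each activation survives with probability 1-p
mask = rng.rand(100000) >= p
dropped = x * mask

# On average, dropout scales the signal by (1-p)...
print(dropped.mean() / x.mean())   # ~0.5

# ...so if the dropout layer is removed, multiplying the downstream
# weights by (1-p) keeps the expected activations unchanged.
w = 2.0
assert np.isclose((x * (w * (1 - p))).mean(), (dropped * w).mean(), rtol=0.05)
```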
It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (*conv*) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.

As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to halve the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.

We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data.
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes. Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/train/cats/cat.9.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:
1. Adding batchnorm to a model can result in **10x or more improvements in training speed**
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.

As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.

This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] p=0.6 bn_model = Sequential(get_bn_layers(0.6)) bn_model.load_weights(model_path + 'finetune3.h5') def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6)) bn_model_vgg = Vgg16BN() bn_model = Sequential(bn_model_vgg.model.layers[1:]) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches,
samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline from importlib import reload import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=32 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. 
It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (*conv*) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.

As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code ??vgg_ft ###Output _____no_output_____ ###Markdown ```py
def vgg_ft(out_dim):
    vgg = Vgg16()
    vgg.ft(out_dim)
    model = vgg.model
    return model
``` ###Code ??Vgg16.ft ###Output _____no_output_____ ###Markdown ```py
def ft(self, num):
    """
        Replace the last layer of the model with a Dense (fully connected) layer of num neurons.
        Will also lock the weights of all layers except the new layer so that we only learn
        weights for the last layer in subsequent training.
        Args:
            num (int) : Number of neurons in the Dense layer
        Returns:
            None
    """
    model = self.model
    model.pop()
    for layer in model.layers: layer.trainable=False
    model.add(Dense(num, activation='softmax'))
    self.compile()
``` ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. Convolution layers take a lot of time to compute, but Dense layers do not. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers # find the last convolution layer last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. # NB: Since we're removing dropout, we want to halve the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.000001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.

We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes. Which types of augmentation are appropriate depends on your data.
For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). ###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data.
Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! 
conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. 
The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:
1. Adding batchnorm to a model can result in **10x or more improvements in training speed**
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.

As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.

This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.
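The two steps just outlined can be sketched in a few lines of NumPy. This is an illustration only, not the Keras implementation - the name `batchnorm_forward` is made up, and a real layer also keeps running statistics for use at inference time:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Step 1: normalize each feature over the batch, just like input normalization
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Step 2: learnt multiply/add parameters restore an arbitrary std dev and mean,
    # so the layer can still match whatever output scale it needs
    return gamma * x_hat + beta

# Badly scaled activations (mean ~200, std ~50) come out well scaled
x = np.random.randn(64, 10) * 50 + 200
out = batchnorm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
```

Because gamma and beta are ordinary trainable parameters, backprop sees both the normalization and the scale/shift, which is what keeps the weights from drifting to extreme values.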
Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, 
nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from __future__ import division, print_function %matplotlib inline from importlib import reload # Python 3 import utils; reload(utils) from utils import * #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) #batch_size=1 batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
1. How is this possible?
2. Is this desirable?

The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set. The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting.
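As a concrete sketch of that mechanism - with hypothetical helper names, not part of utils or Keras - dropout is only a couple of lines of NumPy. This shows the "inverted" variant that Keras uses, which also rescales the surviving activations by 1/(1-p) so that nothing needs to change at test time:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    if not training:
        # Validation/test time: the layer does nothing, which is why
        # validation accuracy can come out higher than training accuracy
        return x
    mask = rng.random(x.shape) >= p   # delete each activation with probability p
    return x * mask / (1 - p)         # rescale survivors to keep the expected value

acts = np.ones((1000, 100))
train_out = dropout(acts, p=0.5)                   # roughly half the activations are zeroed
valid_out = dropout(acts, p=0.5, training=False)   # left untouched
```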
However, it can also result in underfitting if overused, and this is something we should be careful of with our model. So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (*conv*) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.

As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) #build a new vgg model while keeping preweight # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) steps_per_epoch = int(np.ceil(batches.samples/batch_size)) validation_steps = int(np.ceil(val_batches.samples/batch_size)) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, validation_steps) trn_features = conv_model.predict_generator(batches, steps_per_epoch) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. 
However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. If during training we set nodes to be inactive with probability p = 0.5, we now will have to make the weights twice as small as during training. The intuition behind this is very simple - if during train time we were teaching our network to predict '1' in the subsequent layer utilizing only 50% of its weights, now that it has all the weights at its disposal the contribution of each weight needs to only be half as big! ###Code # Since Keras makes use of inverted dropout, we "neutralize" proc_wgts here, # leaving the copied weights unscaled: def proc_wgts(layer): return [o for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) # or just l1.set_weights(l2.get_weights()) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, epochs=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting.
There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.

We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes. Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # data_format='channels_last' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, data_format='channels_last') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). ###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread(path+'cat.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # If we changed it, then ensure that we return to theano dimension ordering # K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured.
Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) steps_per_epoch = int(np.ceil(batches.samples/batch_size)) validation_steps = int(np.ceil(val_batches.samples/batch_size)) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. 
###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=8, validation_data=val_batches, validation_steps=validation_steps) conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=3, validation_data=val_batches, validation_steps=validation_steps) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:
1. Adding batchnorm to a model can result in **10x or more improvements in training speed**
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.

As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.

This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(1000, activation='softmax') ] p=0.6 bn_model = Sequential(get_bn_layers(0.6)) # where is this file?
# bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5') # this weight is only for fully connected layers + BN # Since Keras makes use of inverted dropout, we "neutralize" proc_wgts here, leaving the weights unscaled: def proc_wgts(layer, prev_p, new_p): scal = 1 return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, epochs=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1, validation_data=val_batches, validation_steps=validation_steps) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4, validation_data=val_batches, validation_steps=validation_steps) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4, validation_data=val_batches, validation_steps=validation_steps) final_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path =
"data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
1. How is this possible?
2. Is this desirable?

The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set. The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model. So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them.
The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (*conv*) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.

As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) batches.class_indices val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. # NB: Since we're removing dropout, we want to half the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! 
opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.Which types of augmentation are appropriate depends on your data.
For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). ###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data.
Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! 
conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. 
The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.
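The normalize-then-rescale behaviour described above can be sketched numerically. This is an illustrative NumPy forward pass only: the `gamma`/`beta` values are made up for the example, and the running statistics and backprop machinery of a real batchnorm layer are omitted:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then rescale with learnt gamma/beta."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta            # learnt scale and shift

# Two activations on wildly different scales
x = np.array([[1.0, 50.0],
              [3.0, 150.0]])
out = batchnorm_forward(x, gamma=np.array([1.0, 1.0]), beta=np.array([0.0, 0.0]))
print(out.mean(axis=0))  # approximately [0, 0]
print(out.std(axis=0))   # approximately [1, 1]
```

With `gamma=1, beta=0` this is plain normalization; during training the network is free to learn other values, which is what lets a layer recover any mean/scale it actually needs.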
Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, 
nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code # import os # os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152 # os.environ["CUDA_VISIBLE_DEVICES"] = "" %matplotlib inline import imp import utils imp.reload(utils) from utils import * #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=16 ###Output _____no_output_____ ###Markdown Is the model underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting.
However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer:Find the last convolutional layer ###Code model.layers model.summary() layers = model.layers for index,layer in enumerate(layers): if type(layer) is Conv2D: print("1") [index for index,layer in enumerate(layers) if type(layer) is Conv2D] # find the last conv layer last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Conv2D][-1] last_conv_idx layers[last_conv_idx] layers[:last_conv_idx+1] layers[last_conv_idx+1:] conv_layers = layers[:last_conv_idx+1] # all the conv layers conv_model = Sequential(conv_layers) # build a Sequential model from the conv part # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] # the fully connected layers ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code path batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_classes trn_classes batches.class_indices # call the conv model's predict_generator to extract the validation-set features #val_features = conv_model.predict_generator(val_batches,steps=val_batches.n // batch_size) # would drop the last partial batch val_features = conv_model.predict_generator(val_batches) save_array(model_path + 'valid_convlayer_features.bc', val_features) # extract the training-set features with the conv model's predict_generator; my old GPU couldn't handle this, but it runs after switching to a 1080 Ti!
trn_features = conv_model.predict_generator(batches,verbose=1) save_array(model_path + 'train_convlayer_features.bc', trn_features) val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features = load_array(model_path+'train_convlayer_features.bc') print(trn_features.shape) print(val_features.shape) ###Output (23000, 512, 14, 14) (2000, 512, 14, 14) ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. # Since dropout is removed, halve each layer's weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs a very small learning rate opt = RMSprop(lr=0.00001, rho=0.7) fc_layers def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.),# p=0 effectively removes dropout Dense(4096, activation='relu'), Dropout(0.),# p=0 effectively removes dropout Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) # copy the pre-trained model's weights straight into the new model model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code trn_labels.shape fc_model.fit(trn_features, trn_labels, epochs=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, let's take steps to reduce it. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting.
There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True) ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). ###Code test = ndimage.imread('data/dogscats/test1/unknown/7.jpg') test.shape np.expand_dims(test,0).shape # build a 'batch' containing a single image img = np.expand_dims(ndimage.imread('data/dogscats/test1/unknown/7.jpg'),0) aug_iter = gen.flow(img) type(aug_iter) aug_iter # iterate next(aug_iter)[0].astype(np.uint8) # get a single augmented image # get eight augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] len(aug_imgs) img.shape # plot the original image plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # plot all the augmented images plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured.
Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # conv layers + new FC layers; the conv layers are frozen conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch.
###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches,batches.n // batch_size,epochs=8, validation_data=val_batches,validation_steps=val_batches.n//batch_size) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.
As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096,activation='relu'), BatchNormalization(), Dropout(p), Dense(4096,activation='relu'), BatchNormalization(), Dropout(p), Dense(1000,activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights.
from myvgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, epochs=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) model.summary() final_model.fit_generator(batches,steps_per_epoch=batches.n // batch_size,epochs=1, validation_data=val_batches, validation_steps = val_batches.n // batch_size) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to 
pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to halve the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data.
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes. Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice.
About data augmentation
Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation:
###Code
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
       height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
       channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
###Output
_____no_output_____
###Markdown
Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
###Code
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
###Output
_____no_output_____
###Markdown
As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
###Code
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
###Output
_____no_output_____
###Markdown
Adding data augmentation
Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it:
###Code
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
       height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
###Output
Found 23000 images belonging to 2 classes.
Found 2000 images belonging to 2 classes.
###Markdown
When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image.
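To see why caching is impossible, here's a toy stand-in for the generator (not Keras's actual API) that applies a random horizontal flip - repeated calls on the same image give different outputs, so conv features computed from them would differ too:

```python
import numpy as np

rng = np.random.RandomState(0)  # seeded only so the example is repeatable

def augment(image):
    # Flip the width axis with probability 0.5
    return image[:, ::-1] if rng.rand() < 0.5 else image

img = np.arange(6).reshape(2, 3)
outs = [augment(img) for _ in range(10)]

# Across ten draws, both the original and the flipped version appear
saw_original = any((o == img).all() for o in outs)
saw_flipped = any((o == img[:, ::-1]).all() for o in outs)
print(saw_original, saw_flipped)
```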
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different. Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable:
###Code
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
###Output
_____no_output_____
###Markdown
Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch.
###Code
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
                         validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
                         validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
###Output
_____no_output_____
###Markdown
Batch normalization
About batch normalization
Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly.
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:
1. Adding batchnorm to a model can result in **10x or more improvements in training speed**
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.
As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.
This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
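The two steps above can be sketched in a few lines of numpy. This is a simplified forward pass for a single layer's activations, with hypothetical values for the learnt scale *gamma* and shift *beta*, and ignoring the running statistics that are tracked for use at inference time:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Step 1: normalize over the batch axis - zero mean, unit variance per activation
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    # Step 2: the two trainable parameters set an arbitrary scale and mean
    return gamma * x_hat + beta

x = np.random.randn(64, 4) * 50 + 10   # badly scaled activations
out = batchnorm_forward(x, gamma=2.0, beta=0.5)

# Whatever the input scale, the output has mean ~beta and std ~gamma
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))
```

During training, gamma and beta receive gradients like any other weights, which is how the normalization becomes part of backprop.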
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.
Adding batchnorm to the model
We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers):
###Code
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
    return [
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(1000, activation='softmax')
        ]
def load_fc_weights_from_vgg16bn(model):
    "Load weights for model from the dense layers of the Vgg16BN model."
    # See imagenet_batchnorm.ipynb for info on how the weights for
    # Vgg16BN can be generated from the standard Vgg16 weights.
    from vgg16bn import Vgg16BN
    vgg16_bn = Vgg16BN()
    _, fc_layers = split_at(vgg16_bn.model, Convolution2D)
    copy_weights(fc_layers, model.layers)
p=0.6
bn_model = Sequential(get_bn_layers(p))
load_fc_weights_from_vgg16bn(bn_model)
def proc_wgts(layer, prev_p, new_p):
    scal = (1-prev_p)/(1-new_p)
    return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
    if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6))
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
for l1,l2 in zip(bn_model.layers, bn_layers):
    l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(),
loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. 
The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundently re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. 
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to half the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularlization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. 
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, width_zoom_range=0.2, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). 
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('cat.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). 
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(1000, activation='softmax') ] p=0.6 bn_model = Sequential(get_bn_layers(0.6)) bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5') def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, 
nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. 
It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundently re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. 
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to half the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularlization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. 
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes. Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('cat.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this: 1. Adding batchnorm to a model can result in **10x or more improvements in training speed** 2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps: 1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean 2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop. This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
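We can see both steps in a tiny numpy sketch - here `gamma` and `beta` stand in for the learnt multiply/add parameters, and the numbers are purely illustrative:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # one unit's activations across a batch

# Step 1: normalize to zero mean, unit standard deviation (per batch)
x_hat = (x - x.mean()) / x.std()

# Step 2: a learnt scale (gamma) and shift (beta) let the layer set
# whatever mean and standard deviation it actually needs
gamma, beta = 2.0, 0.5
y = gamma * x_hat + beta

assert abs(x_hat.mean()) < 1e-9 and abs(x_hat.std() - 1) < 1e-9
assert abs(y.mean() - beta) < 1e-9 and abs(y.std() - gamma) < 1e-9
```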
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(1000, activation='softmax') ] p=0.6 bn_model = Sequential(get_bn_layers(0.6)) bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5') def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, 
nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') K.set_value(final_model.optimizer.lr, 0.001) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; from importlib import reload; reload(utils); from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions: 1. How is this possible? 2. Is this desirable? The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set. The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model. So the answer to (2) is: this is probably not desirable.
It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are: - Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats) - Split the model between the convolutional (*conv*) layers and the dense layers - Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch - Create a new model with just the dense layers, and dropout p set to zero - Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...
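As a quick reminder of what the binary dependent means at the output: the final layer is simply a 2-unit softmax, producing two probabilities that sum to one. A minimal numpy sketch, with made-up logits:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

# A 2-way head outputs two probabilities that sum to 1 (e.g. cat vs dog);
# the logits below are invented purely for illustration
probs = softmax(np.array([1.2, -0.3]))
assert np.isclose(probs.sum(), 1.0)
```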
###Code model = vgg_ft(2) ###Output F:\Dropbox\Dropbox\Projects\GitHub\courses\deeplearning1\nbs\vgg16.py:100: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(64, (3, 3), activation="relu")` model.add(Convolution2D(filters, 3, 3, activation='relu')) F:\Dropbox\Dropbox\Projects\GitHub\courses\deeplearning1\nbs\vgg16.py:100: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(64, (3, 3), activation="relu")` model.add(Convolution2D(filters, 3, 3, activation='relu')) F:\Dropbox\Dropbox\Projects\GitHub\courses\deeplearning1\nbs\vgg16.py:100: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(128, (3, 3), activation="relu")` model.add(Convolution2D(filters, 3, 3, activation='relu')) F:\Dropbox\Dropbox\Projects\GitHub\courses\deeplearning1\nbs\vgg16.py:100: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(256, (3, 3), activation="relu")` model.add(Convolution2D(filters, 3, 3, activation='relu')) F:\Dropbox\Dropbox\Projects\GitHub\courses\deeplearning1\nbs\vgg16.py:100: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(512, (3, 3), activation="relu")` model.add(Convolution2D(filters, 3, 3, activation='relu')) ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. 
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) steps_per_epoch = int(np.ceil(batches.samples/batch_size)) validation_steps = int(np.ceil(val_batches.samples/batch_size)) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) batches.class_indices val_features = conv_model.predict_generator(val_batches, validation_steps) trn_features = conv_model.predict_generator(batches, steps_per_epoch) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. 
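To see why the next cell halves the weights: with (non-inverted) dropout at p=0.5, roughly half of the incoming activations were zeroed during training, so the trained weights learned to expect a half-strength signal; once dropout is removed, halving them keeps the expected pre-activations unchanged. A quick numpy check of that expected-value argument (the sizes and values here are illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
acts = rng.rand(100000)                # incoming activations
mask = rng.rand(acts.size) >= 0.5      # non-inverted dropout with p = 0.5

w = 2.0                                # some trained weight
with_dropout = w * (acts * mask).mean()     # expected signal seen in training
without_dropout = (w / 2) * acts.mean()     # halved weight, dropout removed

# The two expected signals agree (up to sampling noise)
assert np.isclose(with_dropout, without_dropout, rtol=0.05)
```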
###Code # Copy the weights from the pre-trained model. # NB: Since we're removing dropout, we want to halve the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, epochs=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment): 1. Add more data 2. Use data augmentation 3. Use architectures that generalize well 4. Add regularization 5. Reduce architecture complexity. We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation.
This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes. Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # data_format='channels_last' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, data_format='channels_last') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
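First, a reminder that there's no magic here at the array level: a horizontal flip is just reversed column indexing, and a width shift is (roughly) a roll along the width axis. A tiny numpy sketch on a made-up 3x3 "image" (wrap-around shift is used for brevity; keras fills the edges instead):

```python
import numpy as np

img = np.arange(9).reshape(3, 3)   # stand-in for a (height, width) image

flipped = img[:, ::-1]             # horizontal_flip
shifted = np.roll(img, 1, axis=1)  # crude width shift by one pixel

print(flipped[0].tolist())   # [2, 1, 0]
print(shifted[0].tolist())   # [2, 0, 1]
```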
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test1/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=8, validation_data=val_batches, validation_steps=validation_steps) conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=3, validation_data=val_batches, validation_steps=validation_steps) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this: 1. Adding batchnorm to a model can result in **10x or more improvements in training speed** 2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps: 1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean 2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop. This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) # Keras uses inverted dropout, so the pre-trained weights need no rescaling; # proc_wgts therefore returns them unchanged def proc_wgts(layer, prev_p, new_p): scal = 1 return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, epochs=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1, validation_data=val_batches, validation_steps=validation_steps) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4, validation_data=val_batches, validation_steps=validation_steps) final_model.save_weights(model_path + 'final2.h5') K.set_value(final_model.optimizer.lr, 0.001) final_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4, validation_data=val_batches, validation_steps=validation_steps) final_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions: 1. How is this possible? 2. Is this desirable? The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set. The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set.
The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model. So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are: - Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats) - Split the model between the convolutional (*conv*) layers and the dense layers - Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch - Create a new model with just the dense layers, and dropout p set to zero - Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]

# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)

def get_fc_model():
    model = Sequential([
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(2, activation='softmax')
        ])

    for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))

    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model

fc_model = get_fc_model()
###Output
_____no_output_____
###Markdown
And fit the model in the usual way:

###Code
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
             batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
###Output
_____no_output_____
###Markdown
Reducing overfitting

Now that we've gotten the model to overfit, we can take a number of steps to reduce this.

Approaches to reducing overfitting

We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):

1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.

We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data.
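Even before reaching for a framework, the simplest augmentations are just array transforms. For example, a horizontal flip is a reversal of the width axis - a minimal sketch on a toy height x width x channels array (names are illustrative):

```python
import numpy as np

img = np.arange(24).reshape(2, 4, 3)   # toy "image": height=2, width=4, 3 channels

flipped = img[:, ::-1, :]              # horizontal flip: reverse the width axis

# The leftmost column of the flipped image is the rightmost column of the original.
assert (flipped[:, 0, :] == img[:, -1, :]).all()
```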
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.

Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)

We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice.

About data augmentation

Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation:

###Code
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
       height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
       channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
###Output
_____no_output_____
###Markdown
Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
###Code
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
###Output
_____no_output_____
###Markdown
As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.

###Code
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
###Output
_____no_output_____
###Markdown
Adding data augmentation

Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it:

###Code
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
                height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
###Output
Found 23000 images belonging to 2 classes.
Found 2000 images belonging to 2 classes.
###Markdown
When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image.
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.

Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable:

###Code
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
###Output
_____no_output_____
###Markdown
Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch.

###Code
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
###Output
_____no_output_____
###Markdown
Batch normalization

About batch normalization

Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly.
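The normalization itself is simple arithmetic. Here's a minimal numpy sketch of the forward computation of a batchnorm layer over one mini-batch (gamma and beta are the two trainable parameters described below; the function name is illustrative):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature across the batch dimension...
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # ...then let the layer choose its own scale and shift (both trainable).
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = 100. * rng.normal(size=(8, 4)) + 50.   # badly scaled activations
out = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))

# With gamma=1 and beta=0, each feature now has ~zero mean and ~unit std.
assert np.allclose(out.mean(axis=0), 0., atol=1e-6)
assert np.allclose(out.std(axis=0), 1., atol=1e-3)
```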
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.

Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:

1. Adding batchnorm to a model can result in **10x or more improvements in training speed**
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.

As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:

1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.

This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.

Adding batchnorm to the model

We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers):

###Code
conv_layers[-1].output_shape[1:]

def get_bn_layers(p):
    return [
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(1000, activation='softmax')
        ]

def load_fc_weights_from_vgg16bn(model):
    "Load weights for model from the dense layers of the Vgg16BN model."
    # See imagenet_batchnorm.ipynb for info on how the weights for
    # Vgg16BN can be generated from the standard Vgg16 weights.
    from vgg16bn import Vgg16BN
    vgg16_bn = Vgg16BN()
    _, fc_layers = split_at(vgg16_bn.model, Convolution2D)
    copy_weights(fc_layers, model.layers)

p=0.6

bn_model = Sequential(get_bn_layers(p))
load_fc_weights_from_vgg16bn(bn_model)

# Rescale the pre-trained weights for the change from their dropout (0.5) to ours (0.6)
def proc_wgts(layer, prev_p, new_p):
    scal = (1-prev_p)/(1-new_p)
    return [o*scal for o in layer.get_weights()]

for l in bn_model.layers:
    if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6))

# Replace the imagenet output layer with our binary (dogs v cats) one
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))

bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])

bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))

bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')

# Build the final model: frozen conv layers plus the batchnorm dense layers
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))

final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)

# Copy the dense-layer weights we just trained into the new model
for l1,l2 in zip(bn_model.layers, bn_layers):
    l2.set_weights(l1.get_weights())

final_model.compile(optimizer=Adam(),
                    loss='categorical_crossentropy', metrics=['accuracy'])

final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')

final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')

# NB: assigning a float to optimizer.lr has no effect after the model is
# compiled; change the learning rate of a live optimizer with K.set_value
K.set_value(final_model.optimizer.lr, 0.001)

final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final3.h5')
###Output
_____no_output_____
The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundently re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. 
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to half the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularlization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. 
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, width_zoom_range=0.2, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). 
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). 
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), 
loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" # path = "data/dogscats/" path = "../../homework/data/dogscats/sample" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. 
The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundently re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. 
###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
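Pre-calculating is safe because the conv layers are frozen: a fixed, deterministic function returns the same output for the same input every time, so there is nothing to gain from re-running it each epoch. A toy numpy sketch of the idea (`expensive_fixed` below is a hypothetical stand-in for the frozen conv layers, not the real model):

```python
import numpy as np

def expensive_fixed(x):
    # Hypothetical stand-in for the frozen conv layers: deterministic,
    # so its output for a given input never changes between epochs
    return np.tanh(x @ np.ones((3, 4)))

X = np.arange(6.).reshape(2, 3)

# Pre-calculate the features once...
features = expensive_fixed(X)

# ...then every epoch of dense-layer training can reuse them unchanged
for epoch in range(3):
    assert np.allclose(expensive_fixed(X), features)
print(features.shape)  # one feature vector per input row: (2, 4)
```

This only works because nothing upstream of the saved features is being trained; as soon as data augmentation randomizes the inputs (later in this notebook), the trick no longer applies.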
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
                     if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code
batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)

val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
batches.class_indices
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)
trn_features = load_array(model_path+'train_convlayer_features.bc')
val_features = load_array(model_path+'valid_convlayer_features.bc')
trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]

# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)

def get_fc_model():
    model = Sequential([
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(2, activation='softmax')
        ])

    for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))

    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model

fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
             batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.

We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data.
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, and minor color changes. Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside-down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
       height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
       channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
###Code
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
                height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image.
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different. Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model - after ensuring that the convolutional layers are not trainable: ###Code
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly.
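To make "adjusting activations so they are of similar scales" concrete, here is a one-line numpy sketch of the basic normalization step - subtract the mean and divide by the standard deviation. (Illustrative only; the keras BatchNormalization layer applies this per mini-batch and per feature, with the extra trainable parameters described later in this section.)

```python
import numpy as np

# Badly scaled activations: mean ~100, standard deviation ~50
x = np.random.RandomState(0).randn(10000) * 50 + 100

# Normalize: subtract the mean, divide by the standard deviation
x_norm = (x - x.mean()) / x.std()

print(abs(round(x_norm.mean(), 6)), round(x_norm.std(), 6))  # 0.0 1.0
```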
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers. Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:
1. Adding batchnorm to a model can result in **10x or more improvements in training speed**
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**.

As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.

This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
    return [
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(1000, activation='softmax')
        ]
def load_fc_weights_from_vgg16bn(model):
    "Load weights for model from the dense layers of the Vgg16BN model."
    # See imagenet_batchnorm.ipynb for info on how the weights for
    # Vgg16BN can be generated from the standard Vgg16 weights.
    from vgg16bn import Vgg16BN
    vgg16_bn = Vgg16BN()
    _, fc_layers = split_at(vgg16_bn.model, Convolution2D)
    copy_weights(fc_layers, model.layers)
p=0.6
bn_model = Sequential(get_bn_layers(0.6))
load_fc_weights_from_vgg16bn(bn_model)
def proc_wgts(layer, prev_p, new_p):
    scal = (1-prev_p)/(1-new_p)
    return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
    if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6))
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(), 
loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. 
The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundently re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code ??vgg_ft model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. 
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) batches.class_indices val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to half the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularlization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. 
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). 
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). 
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), 
loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code # from theano.sandbox import cuda %matplotlib inline import utils; # reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "/storage/pradeep/data/dogscats/" model_path = path + 'models_2016/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. 
The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...
###Code model = vgg_ft(2) ###Output /storage/pradeep/fastai_2016/deeplearning1/nbs/vgg16.py:100: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(64, (3, 3), activation="relu")` model.add(Convolution2D(filters, 3, 3, activation='relu')) /storage/pradeep/fastai_2016/deeplearning1/nbs/vgg16.py:100: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(128, (3, 3), activation="relu")` model.add(Convolution2D(filters, 3, 3, activation='relu')) /storage/pradeep/fastai_2016/deeplearning1/nbs/vgg16.py:100: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(256, (3, 3), activation="relu")` model.add(Convolution2D(filters, 3, 3, activation='relu')) /storage/pradeep/fastai_2016/deeplearning1/nbs/vgg16.py:100: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(512, (3, 3), activation="relu")` model.add(Convolution2D(filters, 3, 3, activation='relu')) ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. 
As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) batches.class_indices val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. # NB: Since we're removing dropout, we want to halve the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation.
For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and Keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and Keras docs to understand the details if you're interested). ###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data.
Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! 
conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. 
The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
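The forward pass described above can be sketched in a few lines of NumPy (a simplified, batch-statistics-only version for intuition; real batchnorm layers also track running averages for use at inference time):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Simplified batchnorm over a batch of activations (rows = examples).

    Step 1: normalize each feature to zero mean / unit variance.
    Step 2: rescale by the trainable gamma (sets the std) and shift by
    beta (sets the mean) -- both take part in the gradient calculations.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.RandomState(1)
x = rng.normal(loc=10.0, scale=3.0, size=(64, 4))   # poorly scaled activations
out = batchnorm_forward(x, gamma=np.full(4, 2.0), beta=np.full(4, 0.5))

print(out.mean(axis=0))  # ~0.5 in every column (beta)
print(out.std(axis=0))   # ~2.0 in every column (gamma)
```

Whatever scale the incoming activations had, the output's per-feature mean and std are set by beta and gamma, which the network is free to learn.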
Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, 
nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code # from theano.sandbox import cuda %matplotlib inline from importlib import reload import utils; reload(utils) from utils import * from __future__ import division, print_function path = "data/dogscats/sample/" # path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=2 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting.
However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) batches.class_indices val_features = conv_model.predict_generator(val_batches, math.ceil(val_batches.samples/val_batches.batch_size)) trn_features = conv_model.predict_generator(batches, math.ceil(batches.samples/batches.batch_size)) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to halve the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model import tensorflow as tf with tf.device('/cpu:0'): fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code import tensorflow as tf with tf.device('/cpu:0'): fc_model.fit(trn_features, trn_labels, epochs=2, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):1. Add more data2. Use data augmentation3. Use architectures that generalize well4. Add regularization5. Reduce architecture complexity.We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation.
This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and Keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient K.set_image_dim_ordering('tf') gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, # data_format="channels_last", channel_shift_range=10., horizontal_flip=True) ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and Keras docs to understand the details if you're interested).
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) ###Output Found 64 images belonging to 2 classes. Found 64 images belonging to 2 classes. ###Markdown *** NB: We don't want to augment or shuffle the validation set *** ###Code val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output _____no_output_____ ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
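A quick way to convince yourself of this (a toy NumPy check, with a hypothetical `augment` function standing in for one draw from `gen.flow()` above): passing the same input through the random augmentation twice gives two different arrays, so the conv features computed from them would differ too:

```python
import numpy as np

rng = np.random.RandomState(42)

def augment(img, rng):
    """Hypothetical stand-in for one random draw from the generator:
    a random horizontal shift plus a random brightness change."""
    shifted = np.roll(img, rng.randint(-2, 3), axis=1)
    return shifted + rng.uniform(-10, 10)  # channel/brightness shift

img = np.arange(25.0).reshape(5, 5)  # a tiny fake image

a = augment(img, rng)
b = augment(img, rng)
# Two passes over the *same* image draw different random parameters,
# so the results differ -- and cached conv features would be stale.
print(np.array_equal(a, b))  # False
```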
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code import tensorflow as tf with tf.device('/cpu:0'): fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.samples, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.samples) conv_model.fit_generator(batches, samples_per_epoch=batches.samples, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.samples) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') # Finally, try with data augmentation bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) 
final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.samples, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.samples) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.samples, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.samples) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.samples, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.samples) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Resnet ###Code from importlib import reload import resnet50; reload(resnet50) from resnet50 import Resnet50 model_path = model_path + 'resnet/' if not os.path.exists(model_path): os.mkdir(model_path) rn0 = Resnet50(include_top=False).model rn0.output_shape[1:] batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) val_features = rn0.predict_generator(val_batches, math.ceil(val_batches.samples/val_batches.batch_size)) trn_features = rn0.predict_generator(batches, math.ceil(batches.samples/batches.batch_size)) save_array(model_path + 'trn_rn0_conv.bc', trn_features) save_array(model_path + 'val_rn0_conv.bc', val_features) trn_features = load_array(model_path + 'trn_rn0_conv.bc') val_features = load_array(model_path + 'val_rn0_conv.bc') ###Output _____no_output_____ ###Markdown FC net ###Code def get_fc_layers(p): return [ BatchNormalization(axis=1, input_shape=rn0.output_shape[1:]), Flatten(), Dropout(p), Dense(1024, activation='relu'), BatchNormalization(), Dropout(p/2), Dense(1024, activation='relu'), 
BatchNormalization(), Dropout(p), Dense(2, activation='softmax') ] model = Sequential(get_fc_layers(0.5)) model.summary() model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) model.fit(trn_features, trn_labels, epochs=1, batch_size=batch_size, validation_data=(val_features, val_labels)) ###Output Train on 64 samples, validate on 64 samples Epoch 1/1 ###Markdown Global average pooling ###Code def get_ap_layers(p): return [ GlobalAveragePooling2D(input_shape=rn0.output_shape[1:]), Dropout(p), Dense(2, activation='softmax') ] model = Sequential(get_ap_layers(0.2)) model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) model.fit(trn_features, trn_labels, epochs=1, batch_size=batch_size, validation_data=(val_features, val_labels)) ###Output Train on 64 samples, validate on 64 samples Epoch 1/1 64/64 [==============================] - 3s 50ms/step - loss: 0.5736 - acc: 0.7500 - val_loss: 0.5201 - val_acc: 0.7344 ###Markdown Resnet large ###Code rn0 = Resnet50(include_top=False, size=(400,400)).model rn0.output_shape[1:] batches = get_batches(path+'train', shuffle=False, batch_size=batch_size, target_size=(400,400)) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size, target_size=(400,400)) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) val_features = rn0.predict_generator(val_batches, math.ceil(val_batches.samples/val_batches.batch_size)) trn_features = rn0.predict_generator(batches, math.ceil(batches.samples/batches.batch_size)) save_array(model_path + 'trn_rn0_conv_lrg.bc', trn_features) save_array(model_path + 'val_rn0_conv_lrg.bc', val_features) trn_features = load_array(model_path + 'trn_rn0_conv_lrg.bc') val_features = load_array(model_path + 'val_rn0_conv_lrg.bc') model = Sequential(get_ap_layers(0.01)) model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) 
model.fit(trn_features, trn_labels, epochs=1, batch_size=batch_size, validation_data=(val_features, val_labels)) ###Output Train on 64 samples, validate on 64 samples Epoch 1/1 64/64 [==============================] - 1s 18ms/step - loss: 0.8954 - acc: 0.5469 - val_loss: 0.5918 - val_acc: 0.6562 ###Markdown Training a better model ###Code from __future__ import division, print_function %matplotlib inline from importlib import reload # Python 3 import utils; reload(utils) from utils import * #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) #batch_size=1 batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:

1. How is this possible?
2. Is this desirable?

The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.

The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.

So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting.
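The train-time versus test-time behaviour of dropout can be sketched as follows. This is a toy of *inverted* dropout (the variant Keras implements), where surviving activations are scaled up by 1/(1-p) during training so that nothing needs rescaling at test time; all names here are illustrative:

```python
import random

def dropout(activations, p, training, rng=random.random):
    """Inverted dropout: during training, zero each activation with
    probability p and scale survivors by 1/(1-p); identity at test time."""
    if not training or p == 0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng() < keep else 0.0 for a in activations]

# Validation/test time: nothing is dropped and nothing is rescaled,
# which is why validation accuracy can exceed training accuracy.
dropout([1.0, 2.0, 3.0], p=0.5, training=False)   # [1.0, 2.0, 3.0]
```

The 1/(1-p) scaling keeps the expected value of each activation the same with and without dropout, so the downstream layers see consistently scaled inputs in both modes.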
So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:

- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (*conv*) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.

As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer.
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) steps_per_epoch = int(np.ceil(batches.samples/batch_size)) validation_steps = int(np.ceil(val_batches.samples/batch_size)) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) val_features = conv_model.predict_generator(val_batches, validation_steps) trn_features = conv_model.predict_generator(batches, steps_per_epoch) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. 
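The precompute-once pattern used above can be sketched generically - a toy with stand-in functions rather than real Keras models: the frozen conv part runs exactly once per input, and only the cheap trainable head runs on every epoch.

```python
def frozen_conv_part(x):
    # stand-in for conv_model.predict(...): expensive and fixed
    return [x * 2, x * x]

def train_head(cached_features, epochs):
    # the trainable head only ever sees the cached features
    updates = 0
    for _ in range(epochs):
        for feats, label in cached_features:
            updates += 1  # one cheap head update per cached example
    return updates

inputs = [(1.0, 0), (2.0, 1), (3.0, 0)]
cached = [(frozen_conv_part(x), y) for x, y in inputs]  # conv part runs once
updates = train_head(cached, epochs=8)  # 8 epochs touch only cached features
```

With real models the cached features are the conv layers' outputs saved to disk (as with `save_array` above), so many epochs of dense-layer experiments cost only the cheap head computation.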
###Code # SINCE KERAS MAKES USE OF INVERTED DROPOUT WE "NEUTRALIZE" proc_wgts(layer): def proc_wgts(layer): return [o for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, epochs=8, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):

1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.

We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data.
For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes. Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!) We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch is randomly changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, data_format='channels_last') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
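As an aside, here's a dependency-free sketch of two of the transforms listed above, hand-rolled on a tiny nested-list "image" purely to show there is no magic involved - this is not the Keras implementation:

```python
def horizontal_flip(img):
    # mirror each row left-to-right
    return [row[::-1] for row in img]

def shift_right(img, pixels, fill=0):
    # pan the image right, padding the vacated columns with `fill`
    return [[fill] * pixels + row[:-pixels] for row in img]

img = [[1, 2, 3],
       [4, 5, 6]]
flipped = horizontal_flip(img)  # [[3, 2, 1], [6, 5, 4]]
shifted = shift_right(img, 1)   # [[0, 1, 2], [0, 4, 5]]
```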
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread(path+'cat.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # If we changed it, then ensure that we return to theano dimension ordering # K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) steps_per_epoch = int(np.ceil(batches.samples/batch_size)) validation_steps = int(np.ceil(val_batches.samples/batch_size)) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image.
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=8, validation_data=val_batches, validation_steps=validation_steps) conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=3, validation_data=val_batches, validation_steps=validation_steps) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). 
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(4096, activation='relu'), Dropout(p), BatchNormalization(), Dense(1000, activation='softmax') ] p=0.6 bn_model = Sequential(get_bn_layers(0.6)) # where is this file? # bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5') # SINCE KERAS MAKES USE OF INVERTED DROPOUT WE "NEUTRALIZE" proc_wgts(layer): def proc_wgts(layer, prev_p, new_p): scal = 1 return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, epochs=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1, validation_data=val_batches, validation_steps=validation_steps) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, 
steps_per_epoch=steps_per_epoch, epochs=4, validation_data=val_batches, validation_steps=validation_steps) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4, validation_data=val_batches, validation_steps=validation_steps) bn_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____ ###Markdown Training a better model ###Code from theano.sandbox import cuda %matplotlib inline import utils; reload(utils) from utils import * from __future__ import division, print_function #path = "data/dogscats/sample/" path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 ###Output _____no_output_____ ###Markdown Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:1. How is this possible?2. Is this desirable?The answer to (1) is that this is happening because of *dropout*. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability *p* (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.So the answer to (2) is: this is probably not desirable. 
It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.) Removing dropout Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)- Split the model between the convolutional (*conv*) layers and the dense layers- Pre-calculate the output of the conv layers, so that we don't have to redundently re-calculate them on every epoch- Create a new model with just the dense layers, and dropout p set to zero- Train this new model using the output of the conv layers as training data. As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent... ###Code model = vgg_ft(2) ###Output _____no_output_____ ###Markdown ...and load our fine-tuned weights. ###Code model.load_weights(model_path+'finetune3.h5') ###Output _____no_output_____ ###Markdown We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the *Flatten()* layer. 
We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer: ###Code layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] last_conv_idx layers[last_conv_idx] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) # Dense layers - also known as fully connected or 'FC' layers fc_layers = layers[last_conv_idx+1:] ###Output _____no_output_____ ###Markdown Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way! ###Code batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) val_classes = val_batches.classes trn_classes = batches.classes val_labels = onehot(val_classes) trn_labels = onehot(trn_classes) batches.class_indices val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) val_features.shape trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') trn_features.shape ###Output _____no_output_____ ###Markdown For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout. ###Code # Copy the weights from the pre-trained model. 
# NB: Since we're removing dropout, we want to halve the weights def proc_wgts(layer): return [o/2 for o in layer.get_weights()] #def proc_wgts(layer): return [o for o in layer.get_weights()] # Such a finely tuned model needs to be updated very slowly! opt = RMSprop(lr=0.00001, rho=0.7) def get_fc_model(): model = Sequential([ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.), Dense(4096, activation='relu'), Dropout(0.), Dense(2, activation='softmax') ]) for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) return model fc_model = get_fc_model() ###Output _____no_output_____ ###Markdown And fit the model in the usual way: ###Code fc_model.fit(trn_features, trn_labels, nb_epoch=2, batch_size=batch_size, validation_data=(val_features, val_labels)) fc_model.save_weights(model_path+'no_dropout.h5') fc_model.load_weights(model_path+'no_dropout.h5') ###Output _____no_output_____ ###Markdown Reducing overfitting Now that we've gotten the model to overfit, we can take a number of steps to reduce this. Approaches to reducing overfitting We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):

1. Add more data
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity.

We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation.
This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)We recommend *always* using at least some light data augmentation, unless you have so much data that your model will never see the same input twice. About data augmentation Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation: ###Code # dim_ordering='tf' uses tensorflow dimension ordering, # which is the same order as matplotlib uses for display. # Therefore when just using for display purposes, this is more convenient gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, channel_shift_range=10., horizontal_flip=True, dim_ordering='tf') ###Output _____no_output_____ ###Markdown Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested). 
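The behaviour being described can also be sketched as a generator that re-randomizes each draw - an illustrative toy, not the Keras `ImageDataGenerator`:

```python
import random

def augmenting_batches(images, rng=random.random):
    """Endlessly yield images, flipping each drawn copy with 50% chance,
    so the model rarely sees exactly the same input twice."""
    while True:
        for img in images:
            yield [row[::-1] for row in img] if rng() < 0.5 else img

gen_toy = augmenting_batches([[[1, 2], [3, 4]]], rng=lambda: 0.0)
augmented = next(gen_toy)  # rng is pinned below 0.5, so this copy is flipped
```

Because the randomness is applied at draw time rather than ahead of time, every epoch effectively trains on a fresh variant of the dataset - which is also why conv features can't be precomputed when augmentation is on.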
###Code # Create a 'batch' of a single image img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0) # Request the generator to create batches from this image aug_iter = gen.flow(img) # Get eight examples of these augmented images aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)] # The original plt.imshow(img[0]) ###Output _____no_output_____ ###Markdown As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches. ###Code # Augmented data plots(aug_imgs, (20,7), 2) # Ensure that we return to theano dimension ordering K.set_image_dim_ordering('th') ###Output _____no_output_____ ###Markdown Adding data augmentation Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it: ###Code gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) batches = get_batches(path+'train', gen, batch_size=batch_size) # NB: We don't want to augment or shuffle the validation set val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. ###Markdown When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable: ###Code fc_model = get_fc_model() for layer in conv_model.layers: layer.trainable = False # Look how easy it is to connect two models together! conv_model.add(fc_model) ###Output _____no_output_____ ###Markdown Now we can compile, train, and save our model as usual - note that we use *fit_generator()* since we want to pull random images from the directories on every batch. ###Code conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) conv_model.save_weights(model_path + 'aug1.h5') conv_model.load_weights(model_path + 'aug1.h5') ###Output _____no_output_____ ###Markdown Batch normalization About batch normalization Batch normalization (*batchnorm*) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called *normalization*. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that **all modern networks should use batchnorm, or something equivalent**. There are two reasons for this:1. Adding batchnorm to a model can result in **10x or more improvements in training speed**2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to **reduce overfitting**. As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization).
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so. Adding batchnorm to the model We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers): ###Code conv_layers[-1].output_shape[1:] def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(1000, activation='softmax') ] def load_fc_weights_from_vgg16bn(model): "Load weights for model from the dense layers of the Vgg16BN model." # See imagenet_batchnorm.ipynb for info on how the weights for # Vgg16BN can be generated from the standard Vgg16 weights. from vgg16bn import Vgg16BN vgg16_bn = Vgg16BN() _, fc_layers = split_at(vgg16_bn.model, Convolution2D) copy_weights(fc_layers, model.layers) p=0.6 bn_model = Sequential(get_bn_layers(0.6)) load_fc_weights_from_vgg16bn(bn_model) def proc_wgts(layer, prev_p, new_p): scal = (1-prev_p)/(1-new_p) return [o*scal for o in layer.get_weights()] for l in bn_model.layers: if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) bn_model.pop() for layer in bn_model.layers: layer.trainable=False bn_model.add(Dense(2,activation='softmax')) bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy']) bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels)) bn_model.save_weights(model_path+'bn.h5') bn_model.load_weights(model_path+'bn.h5') bn_layers = get_bn_layers(0.6) bn_layers.pop() bn_layers.append(Dense(2,activation='softmax')) final_model = Sequential(conv_layers) for layer in final_model.layers: layer.trainable = False for layer in bn_layers: final_model.add(layer) for l1,l2 in zip(bn_model.layers, bn_layers): l2.set_weights(l1.get_weights()) final_model.compile(optimizer=Adam(), 
loss='categorical_crossentropy', metrics=['accuracy']) final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final1.h5') final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final2.h5') final_model.optimizer.lr=0.001 final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) final_model.save_weights(model_path + 'final3.h5') ###Output _____no_output_____
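The two-step batchnorm computation described above - normalize over the batch, then rescale with learned parameters - can be sketched in plain NumPy (an illustration only, not Keras's implementation; `gamma` and `beta` stand for the learned multiply and add parameters):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then apply the learned scale/shift."""
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # step 1: normalize
    return gamma * x_hat + beta              # step 2: learned multiply/add

# A batch of four samples with three badly scaled features
acts = np.array([[100.0, 0.1, -5.0],
                 [110.0, 0.2, -4.0],
                 [ 90.0, 0.3, -6.0],
                 [100.0, 0.4, -5.0]])
out = batchnorm_forward(acts, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0))  # close to zero for every feature
print(out.std(axis=0))   # close to one for every feature
```

With `gamma=1` and `beta=0` this is pure normalization; during training the gradient updates move `gamma` and `beta` away from those values whenever a layer needs a different output scale.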
Mathematics/Mathematical Modeling/07.06-Path-Planning-for-a-Simple-Car.ipynb
###Markdown Path Planning for a Simple Car Required InstallationsIf run on Google Colab, it is necessary to install any needed solvers for each Colab session. The following cell tests if the notebook is run on Google Colab, then installs Pyomo and Ipopt if not already installed. ###Code try: import google.colab try: from pyomo.environ import * except: !pip install -q pyomo if not 'ipopt_executable' in vars(): !wget -N -q "https://ampl.com/dl/open/ipopt/ipopt-linux64.zip" !unzip -o -q ipopt-linux64 ipopt_executable = '/content/ipopt' except: pass ###Output _____no_output_____ ###Markdown Kinematic Model The following equations describe a simple model of a car\begin{align}\frac{dx}{dt} & = v \cos(\theta) \\\frac{dy}{dt} & = v \sin(\theta) \\\frac{d\theta}{dt} & = \frac{v}{L}\tan(\phi) \\\end{align}where $x$ and $y$ denote the position of the center of the rear axle, $\theta$ is the angle of the car axis to the horizontal, $v$ is velocity, and $\phi$ is the angle of the front steering wheels to the car axis. 
The length $L$ is the distance from the center of the rear axle to the center of the front axle.The velocity $v$ is controlled by the acceleration $a$; the position of the wheels is controlled by the rate-limited steering input $u$.\begin{align}\frac{dv}{dt} & = a \\\frac{d\phi}{dt} & = u\end{align}The state of the car is determined by the values of the five state variables $x$, $y$, $\theta$, $v$, and $\phi$.The path planning problem is to find values of the manipulable variables $a(t)$ and $u(t)$ on a time interval $0 \leq t \leq t_f$ to drive the car from an initial condition $\left[x(0), y(0), \theta(0), v(0), \phi(0)\right]$ to a specified final condition $\left[x(t_f), y(t_f), \theta(t_f), v(t_f), \phi(t_f)\right]$ that minimize an objective function:\begin{align}J = \min \int_0^{t_f} \left( \phi(t)^2 + \alpha a(t)^2 + \beta u(t)^2\right)\,dt\end{align}and which satisfy operational constraints\begin{align*}| u | & \leq u_{max}\end{align*} Pyomo Model ###Code from pyomo.environ import * from pyomo.dae import * L = 2 tf = 50 # create a model object m = ConcreteModel() # define the independent variable m.t = ContinuousSet(bounds=(0, tf)) # define control inputs m.a = Var(m.t) m.u = Var(m.t, domain=Reals, bounds=(-0.1,0.1)) # define the dependent variables m.x = Var(m.t) m.y = Var(m.t) m.theta = Var(m.t) m.v = Var(m.t) m.phi = Var(m.t, domain=Reals, bounds=(-0.5,0.5)) m.xdot = DerivativeVar(m.x) m.ydot = DerivativeVar(m.y) m.thetadot = DerivativeVar(m.theta) m.vdot = DerivativeVar(m.v) m.phidot = DerivativeVar(m.phi) # define the differential equation as a constraint m.ode_x = Constraint(m.t, rule=lambda m, t: m.xdot[t] == m.v[t]*cos(m.theta[t])) m.ode_y = Constraint(m.t, rule=lambda m, t: m.ydot[t] == m.v[t]*sin(m.theta[t])) m.ode_t = Constraint(m.t, rule=lambda m, t: m.thetadot[t] == m.v[t]*tan(m.phi[t])/L) m.ode_u = Constraint(m.t, rule=lambda m, t: m.vdot[t] == m.a[t]) m.ode_p = Constraint(m.t, rule=lambda m, t: m.phidot[t] == m.u[t]) # path
constraints m.path_x1 = Constraint(m.t, rule=lambda m, t: m.x[t] >= 0) m.path_y1 = Constraint(m.t, rule=lambda m, t: m.y[t] >= 0) # initial conditions m.ic = ConstraintList() m.ic.add(m.x[0]==0) m.ic.add(m.y[0]==0) m.ic.add(m.theta[0]==0) m.ic.add(m.v[0]==0) m.ic.add(m.phi[0]==0) # final conditions m.fc = ConstraintList() m.fc.add(m.x[tf]==0) m.fc.add(m.y[tf]==20) m.fc.add(m.theta[tf]==0) m.fc.add(m.v[tf]==0) m.fc.add(m.phi[tf]==0) # define the optimization objective m.integral = Integral(m.t, wrt=m.t, rule=lambda m, t: 0.2*m.phi[t]**2 + m.a[t]**2 + m.u[t]**2) m.obj = Objective(expr=m.integral) # transform and solve TransformationFactory('dae.collocation').apply_to(m, wrt=m.t, nfe=3, ncp=12, method='BACKWARD') SolverFactory('ipopt', executable=ipopt_executable).solve(m).write() ###Output _____no_output_____ ###Markdown Accessing Solution Data ###Code # access the results t= [t for t in m.t] a = [m.a[t]() for t in m.t] u = [m.u[t]() for t in m.t] x = [m.x[t]() for t in m.t] y = [m.y[t]() for t in m.t] theta = [m.theta[t]() for t in m.t] v = [m.v[t]() for t in m.t] phi = [m.phi[t]() for t in m.t] ###Output _____no_output_____ ###Markdown Visualizing Car Path ###Code % matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set() scl=0.3 def draw_car(x=0, y=0, theta=0, phi=0): R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]) car = np.array([[0.2, 0.5], [-0.2, 0.5], [0, 0.5], [0, -0.5], [0.2, -0.5], [-0.2, -0.5], [0, -0.5], [0, 0], [L, 0], [L, 0.5], [L + 0.2*np.cos(phi), 0.5 + 0.2*np.sin(phi)], [L - 0.2*np.cos(phi), 0.5 - 0.2*np.sin(phi)], [L, 0.5],[L, -0.5], [L + 0.2*np.cos(phi), -0.5 + 0.2*np.sin(phi)], [L - 0.2*np.cos(phi), -0.5 - 0.2*np.sin(phi)]]) carz= scl*R.dot(car.T) plt.plot(x + carz[0], y + carz[1], 'k', lw=2) plt.plot(x, y, 'k.', ms=10) plt.figure(figsize=(10,10)) for xs,ys,ts,ps in zip(x,y,theta,phi): draw_car(xs, ys, ts, scl*ps) plt.plot(x, y, 'r--', lw=0.8) plt.axis('square') 
plt.figure(figsize=(10,8)) plt.subplot(311) plt.plot(t, a, t, u) plt.legend(['Acceleration','Steering Input']) plt.subplot(312) plt.plot(t, phi, t, theta) plt.legend(['Wheel Position','Car Direction']) plt.subplot(313) plt.plot(t, v) plt.legend(['Velocity']) ###Output _____no_output_____
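The kinematic equations above can be sanity-checked outside of Pyomo with a short forward-Euler simulation (a sketch with made-up constant inputs; `simulate_car` is not part of the notebook's code):

```python
import numpy as np

def simulate_car(a_fn, u_fn, state0, tf=10.0, dt=0.01, L=2.0):
    """Forward-Euler integration of the simple car kinematics."""
    x, y, theta, v, phi = state0
    n = int(round(tf / dt))
    for k in range(n):
        t = k * dt
        x += dt * v * np.cos(theta)          # dx/dt = v cos(theta)
        y += dt * v * np.sin(theta)          # dy/dt = v sin(theta)
        theta += dt * v * np.tan(phi) / L    # dtheta/dt = (v/L) tan(phi)
        v += dt * a_fn(t)                    # dv/dt = a
        phi += dt * u_fn(t)                  # dphi/dt = u
    return x, y, theta, v, phi

# Constant acceleration with the wheels held straight: the car should
# drive straight down the x-axis, ending near x = 0.5*a*tf**2 = 25.
xf, yf, thetaf, vf, phif = simulate_car(a_fn=lambda t: 0.5,
                                        u_fn=lambda t: 0.0,
                                        state0=(0.0, 0.0, 0.0, 0.0, 0.0))
print(xf, yf, vf)
```

Checks like this are a cheap way to confirm that the ODE constraints handed to the solver encode the dynamics you intended.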
t81_558_class_09_3_transfer_cv.ipynb
###Markdown T81-558: Applications of Deep Neural Networks**Module 9: Regularization: L1, L2 and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 9 Material* Part 9.1: Introduction to Keras Transfer Learning [[Video]](https://www.youtube.com/watch?v=WLlP6S-Z8Xs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_1_keras_transfer.ipynb)* Part 9.2: Popular Pretrained Neural Networks for Keras [[Video]](https://www.youtube.com/watch?v=ctVA1_46YEE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_2_popular_transfer.ipynb)* **Part 9.3: Transfer Learning for Computer Vision and Keras** [[Video]](https://www.youtube.com/watch?v=61vMUm_XBMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_3_transfer_cv.ipynb)* Part 9.4: Transfer Learning for Languages and Keras [[Video]](https://www.youtube.com/watch?v=ajmAAg9FxXA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_4_transfer_nlp.ipynb)* Part 9.5: Transfer Learning for Keras Feature Engineering [[Video]](https://www.youtube.com/watch?v=Dttxsm8zpL8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_5_transfer_feature_eng.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code # Start CoLab try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 9.3: Transfer Learning for Computer Vision and KerasIn this part we will make use of transfer learning to create a simple neural network that can recognize dog breeds. 
To keep the example simple, we will only train for a handful of breeds. A much more advanced form of this model can be found at the [Microsoft Dog Breed Image Search](https://www.bing.com/visualsearch/Microsoft/WhatDog).To keep computation times to a minimum, we will make use of the MobileNet, which is built into Keras. We will begin by loading the entire MobileNet and seeing how well it classifies with several test images. MobileNet can classify images into 1,000 different categories. We will ultimately extend it to classify image types that are not in its dataset, in this example 3 dog breeds. However, we begin by classifying image types among those that it was trained on. Even though our test images were not in its training set, the loaded neural network should be able to classify them. ###Code import pandas as pd import numpy as np import os import tensorflow.keras import matplotlib.pyplot as plt from tensorflow.keras.layers import Dense,GlobalAveragePooling2D from tensorflow.keras.applications import MobileNet from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.mobilenet import preprocess_input from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam ###Output _____no_output_____ ###Markdown We begin by downloading weights for a MobileNet trained for the imagenet dataset. This will take some time to download the first time you train the network. ###Code model = MobileNet(weights='imagenet',include_top=True) ###Output Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.6/mobilenet_1_0_224_tf.h5 17227776/17225924 [==============================] - 1s 0us/step
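The `preprocess_input` function imported above rescales pixel values before they reach MobileNet; its effect can be sketched in plain NumPy (a sketch of the [-1, 1] scaling MobileNet's preprocessing applies; verify the exact behavior against your Keras version):

```python
import numpy as np

def mobilenet_preprocess_sketch(pixels):
    """Scale 8-bit pixel values in [0, 255] to the [-1, 1] range MobileNet expects."""
    return pixels.astype(np.float32) / 127.5 - 1.0

print(mobilenet_preprocess_sketch(np.array([0, 127.5, 255])))  # 0 -> -1, 127.5 -> 0, 255 -> 1
```

Feeding raw 0-255 pixel values to a network trained on [-1, 1] inputs usually produces nonsense predictions, which is why every prediction cell below calls `preprocess_input` first.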
Simply looking at the structure of an advanced state-of-the-art neural network can be educational. ###Code model.summary() ###Output Model: "mobilenet_1.00_224" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 225, 225, 3) 0 _________________________________________________________________ conv1 (Conv2D) (None, 112, 112, 32) 864 _________________________________________________________________ conv1_bn (BatchNormalization (None, 112, 112, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, 112, 112, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, 112, 112, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, 112, 112, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, 112, 112, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, 112, 112, 64) 0 _________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, 113, 113, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, 56, 56, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, 56, 56, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, 56, 56, 64) 0 
_________________________________________________________________ conv_pw_2 (Conv2D) (None, 56, 56, 128) 8192 _________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, 56, 56, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, 56, 56, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, 57, 57, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, 28, 28, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, 28, 28, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, 28, 28, 128) 0 _________________________________________________________________ conv_pw_4 (Conv2D) (None, 28, 28, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_4_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, 28, 28, 256) 2304 
_________________________________________________________________ conv_dw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_dw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, 28, 28, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, 29, 29, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, 14, 14, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, 14, 14, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, 14, 14, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, 14, 14, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 
_________________________________________________________________ conv_pw_7_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, 14, 14, 512) 262144 
_________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, 15, 15, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, 7, 7, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, 7, 7, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, 7, 7, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, 7, 7, 1024) 524288 _________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, 7, 7, 1024) 9216 _________________________________________________________________ conv_dw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 
_________________________________________________________________ conv_dw_13_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ conv_pw_13 (Conv2D) (None, 7, 7, 1024) 1048576 _________________________________________________________________ conv_pw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_13_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ global_average_pooling2d (Gl (None, 1024) 0 _________________________________________________________________ reshape_1 (Reshape) (None, 1, 1, 1024) 0 _________________________________________________________________ dropout (Dropout) (None, 1, 1, 1024) 0 _________________________________________________________________ conv_preds (Conv2D) (None, 1, 1, 1000) 1025000 _________________________________________________________________ reshape_2 (Reshape) (None, 1000) 0 _________________________________________________________________ act_softmax (Activation) (None, 1000) 0 ================================================================= Total params: 4,253,864 Trainable params: 4,231,976 Non-trainable params: 21,888 _________________________________________________________________ ###Markdown Just examining the above structure, several clues to neural network architecture become evident.Notice how some of the layers have zeros in their number of parameters. Layers defined entirely by hyperparameters always show zero parameters; nothing about those layers is learned. The other layers have learnable parameters that are adjusted as training occurs. The layer types themselves are all hyperparameters; Keras will not change a convolution layer to a max pooling layer for you. However, the layers that have parameters are trained/adjusted by the training algorithm. Most of the parameters seen above are the weights of the neural network.Some of the parameters are marked as non-trainable.
These cannot be adjusted by the training algorithm. When we later use transfer learning with this model we will strip off the final layers that classify 1000 items and replace them with our 3 dog breed classification layer. Only our new layers will be trainable; we will mark the existing layers as non-trainable.The ReLU activation function is used throughout the neural network. Batch normalization and dropout are also used. We cannot see the dropout percentage used; that might be specified in the original paper. Many deep neural networks are pyramid shaped, and this is the case for this one. This neural network uses an expanding pyramid shape, as you can see the neuron/filter counts expand from 32 to 64 to 128 to 256 to 512 and max out at 1,024.We will now use the MobileNet to classify several image URLs below. You can add additional URLs of your own to see how well the MobileNet can classify. ###Code %matplotlib inline from PIL import Image, ImageFile from matplotlib.pyplot import imshow import requests import numpy as np from io import BytesIO from IPython.display import display, HTML from tensorflow.keras.applications.mobilenet import decode_predictions IMAGE_WIDTH = 224 IMAGE_HEIGHT = 224 IMAGE_CHANNELS = 3 images = [ "https://cdn.shopify.com/s/files/1/0712/4751/products/SMA-01_2000x.jpg?v=1537468751", "https://farm2.static.flickr.com/1394/967537586_87b1358ad3.jpg", "https://sites.wustl.edu/jeffheaton/files/2016/07/jheaton_wustl1-262izm5-458x458.jpg", "https://1.bp.blogspot.com/-0vGbvWUrSAA/XP-OurPTA4I/AAAAAAAAgtg/"\ "TGx6YiGBEGIMjnViDjvVnYzYp__DJ6I-gCLcBGAs/s320/B%252Bt%2525aMbJQkm3Z50rqput%252BA.jpg" ] def make_square(img): cols,rows = img.size if rows>cols: pad = (rows-cols)/2 img = img.crop((pad,0,cols,cols)) else: pad = (cols-rows)/2 img = img.crop((0,pad,rows,rows)) return img for url in images: x = [] ImageFile.LOAD_TRUNCATED_IMAGES = False response = requests.get(url) img = Image.open(BytesIO(response.content)) img.load() img =
img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) pred = model.predict(x) display("___________________________________________________________________________________________") display(img) print(np.argmax(pred,axis=1)) lst = decode_predictions(pred, top=5) for itm in lst[0]: print(itm) ###Output _____no_output_____ ###Markdown Overall, the neural network is doing quite well. However, it does not classify me as a "person"; rather I am classified as a "suit". Similarly, my English Bulldog Hickory is classified as a "pug". This is likely because I am only providing a closeup of his face. For many applications, MobileNet might be entirely acceptable as an image classifier. However, if you need to classify very specialized images that are not in the 1,000 image types supported by imagenet, it is necessary to use transfer learning. TransferIt is possible to create your own image classification network from scratch. This would take considerable time and resources. Just creating a dog breed classifier would require many pictures of dogs, labeled by breed. By using a pretrained neural network, you are tapping into knowledge already built into the lower layers of the neural network. The transferred layers likely already have some notion of eyes, ears, feet, and fur. These lower-level concepts help to train the neural network to identify dog breeds.Next we reload the MobileNet; however, this time we set the *include_top* parameter to *False*. This instructs Keras not to load the final classification layers. This is the common mode of operation for transfer learning. We display a summary to see that the top classification layer is now missing. ###Code base_model=MobileNet(weights='imagenet',include_top=False) #imports the mobilenet model and discards the last 1000 neuron layer.
base_model.summary() ###Output C:\Users\jheaton\Miniconda3\envs\tensorflow\lib\site-packages\keras_applications\mobilenet.py:207: UserWarning: `input_shape` is undefined or non-square, or `rows` is not in [128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default. warnings.warn('`input_shape` is undefined or non-square, ' ###Markdown We will add new top layers to the neural network. Our final SoftMax layer includes support for 3 classes. ###Code x=base_model.output x=GlobalAveragePooling2D()(x) x=Dense(1024,activation='relu')(x) x=Dense(1024,activation='relu')(x) preds=Dense(3,activation='softmax')(x) ###Output _____no_output_____ ###Markdown Next, we mark the original MobileNet layers as non-trainable and our new layers as trainable. ###Code model=Model(inputs=base_model.input,outputs=preds) for layer in model.layers[:20]: layer.trainable=False for layer in model.layers[20:]: layer.trainable=True ###Output _____no_output_____ ###Markdown To train the neural network we must create a directory structure to hold the images. The Keras command **flow_from_directory** reads images laid out in such a structure: a parent folder containing one subfolder per class, where each class folder contains the images of that class. We can also specify a target size; in this case the original MobileNet size of 224x224 is desired. ###Code if COLAB: PATH = "" else: PATH = "./data/transfer" train_datagen=ImageDataGenerator(preprocessing_function=preprocess_input) train_generator=train_datagen.flow_from_directory(PATH, target_size=(224,224), color_mode='rgb', batch_size=1, class_mode='categorical', shuffle=True) ###Output _____no_output_____ ###Markdown We are now ready to compile and fit the neural network. Notice we are using **fit_generator** rather than **fit**. This is because we are using the convenient **ImageDataGenerator**.
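The class-per-folder layout expected by **flow_from_directory** can be built with a short script, and the resulting class-to-index mapping reproduced (a sketch with hypothetical breed names; Keras typically orders classes alphabetically by folder name, which you can confirm via `train_generator.class_indices`):

```python
import os
import tempfile

# Build a hypothetical training root with one subfolder per class
root = tempfile.mkdtemp()
for breed in ["poodle", "bulldog", "german_shepherd"]:
    os.makedirs(os.path.join(root, breed))

# Sketch of the index mapping the generator's class_indices attribute reports:
# classes sorted alphabetically, numbered from zero
class_indices = {name: i for i, name in enumerate(sorted(os.listdir(root)))}
print(class_indices)  # {'bulldog': 0, 'german_shepherd': 1, 'poodle': 2}
```

These indices are what the softmax output positions correspond to, so they are needed later to translate a predicted class number back into a breed name.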
###Code model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy']) step_size_train=train_generator.n//train_generator.batch_size model.fit_generator(generator=train_generator, steps_per_epoch=step_size_train, epochs=50) ###Output _____no_output_____ ###Markdown We are now ready to see how our new model can predict dog breeds. The URLs in the code below provide several example dogs to look at. Feel free to add your own. ###Code %matplotlib inline from PIL import Image, ImageFile from matplotlib.pyplot import imshow import requests import numpy as np from io import BytesIO from IPython.display import display, HTML from tensorflow.keras.applications.mobilenet import decode_predictions IMAGE_WIDTH = 224 IMAGE_HEIGHT = 224 IMAGE_CHANNELS = 3 images = [ "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a8/02.Owczarek_niemiecki_u%C5%BCytkowy_kr%C3%B3tkow%C5%82osy_suka.jpg/2560px-02.Owczarek_niemiecki_u%C5%BCytkowy_kr%C3%B3tkow%C5%82osy_suka.jpg", "https://upload.wikimedia.org/wikipedia/commons/5/51/DSHwiki.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e5/Axel%2C_the_English_Bulldog.jpg/440px-Axel%2C_the_English_Bulldog.jpg", "https://1.bp.blogspot.com/-0vGbvWUrSAA/XP-OurPTA4I/AAAAAAAAgtg/TGx6YiGBEGIMjnViDjvVnYzYp__DJ6I-gCLcBGAs/s320/B%252Bt%2525aMbJQkm3Z50rqput%252BA.jpg", "https://thehappypuppysite.com/wp-content/uploads/2017/12/poodle1.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/4/40/Pudel_Grossschwarz.jpg/440px-Pudel_Grossschwarz.jpg" ] def make_square(img): cols,rows = img.size if rows>cols: pad = (rows-cols)/2 img = img.crop((pad,0,cols,cols)) else: pad = (cols-rows)/2 img = img.crop((0,pad,rows,rows)) return img for url in images: x = [] ImageFile.LOAD_TRUNCATED_IMAGES = False response = requests.get(url) img = Image.open(BytesIO(response.content)) img.load() img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) 
    pred = model.predict(x)

    display("___________________________________________________________________________________________")
    display(img)
    print(np.argmax(pred,axis=1))
###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks
**Module 9: Transfer Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).

Module 9 Material
* Part 9.1: Introduction to Keras Transfer Learning [[Video]](https://www.youtube.com/watch?v=WLlP6S-Z8Xs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_1_keras_transfer.ipynb)
* Part 9.2: Popular Pretrained Neural Networks for Keras [[Video]](https://www.youtube.com/watch?v=ctVA1_46YEE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_2_popular_transfer.ipynb)
* **Part 9.3: Transfer Learning for Computer Vision and Keras** [[Video]](https://www.youtube.com/watch?v=61vMUm_XBMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_3_transfer_cv.ipynb)
* Part 9.4: Transfer Learning for Languages and Keras [[Video]](https://www.youtube.com/watch?v=ajmAAg9FxXA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_4_transfer_nlp.ipynb)
* Part 9.5: Transfer Learning for Keras Feature Engineering [[Video]](https://www.youtube.com/watch?v=Dttxsm8zpL8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_5_transfer_feature_eng.ipynb)

Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
###Code
# Start CoLab
try:
    %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False
###Output Note: not using Google CoLab ###Markdown Part 9.3: Transfer Learning for Computer Vision and Keras
In this part we will make use of transfer learning to create a simple neural network that can recognize dog breeds. To keep the example simple, we will only train for a handful of breeds. A much more advanced form of this model can be found at the [Microsoft Dog Breed Image Search](https://www.bing.com/visualsearch/Microsoft/WhatDog). To keep computation times to a minimum, we will make use of the MobileNet, which is built into Keras. We will begin by loading the entire MobileNet and seeing how well it classifies with several test images. MobileNet can classify 1,000 different images. We will ultimately extend it to classify image types that are not in its dataset, in this example three dog breeds. However, we begin by classifying image types among those that it was trained on. Even though our test images were not in its training set, the loaded neural network should be able to classify them. ###Code
import pandas as pd
import numpy as np
import os
import tensorflow.keras
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense,GlobalAveragePooling2D
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
###Output _____no_output_____ ###Markdown We begin by downloading weights for a MobileNet trained for the imagenet dataset. This will take some time to download the first time you train the network.
###Code model = MobileNet(weights='imagenet',include_top=True) ###Output Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.6/mobilenet_1_0_224_tf.h5 17227776/17225924 [==============================] - 1s 0us/step ###Markdown The loaded network is a Keras neural network, just like those that we've been working with so far. However, this is a neural network that was trained/engineered on advanced hardware. Simply looking at the structure of an advanced state-of-the-art neural network can be educational. ###Code model.summary() ###Output Model: "mobilenet_1.00_224" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 225, 225, 3) 0 _________________________________________________________________ conv1 (Conv2D) (None, 112, 112, 32) 864 _________________________________________________________________ conv1_bn (BatchNormalization (None, 112, 112, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, 112, 112, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, 112, 112, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, 112, 112, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, 112, 112, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, 112, 112, 64) 0 
_________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, 113, 113, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, 56, 56, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, 56, 56, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, 56, 56, 64) 0 _________________________________________________________________ conv_pw_2 (Conv2D) (None, 56, 56, 128) 8192 _________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, 56, 56, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, 56, 56, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, 57, 57, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, 28, 28, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, 28, 28, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, 28, 28, 128) 0 
_________________________________________________________________ conv_pw_4 (Conv2D) (None, 28, 28, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_4_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, 28, 28, 256) 2304 _________________________________________________________________ conv_dw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_dw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, 28, 28, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, 29, 29, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, 14, 14, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, 14, 14, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, 14, 14, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, 14, 14, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, 14, 14, 512) 4608 
_________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_7_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_9_relu (ReLU) (None, 14, 14, 512) 0 
_________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, 15, 15, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, 7, 7, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, 7, 7, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, 7, 7, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, 7, 7, 1024) 524288 
_________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, 7, 7, 1024) 9216 _________________________________________________________________ conv_dw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_dw_13_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ conv_pw_13 (Conv2D) (None, 7, 7, 1024) 1048576 _________________________________________________________________ conv_pw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_13_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ global_average_pooling2d (Gl (None, 1024) 0 _________________________________________________________________ reshape_1 (Reshape) (None, 1, 1, 1024) 0 _________________________________________________________________ dropout (Dropout) (None, 1, 1, 1024) 0 _________________________________________________________________ conv_preds (Conv2D) (None, 1, 1, 1000) 1025000 _________________________________________________________________ reshape_2 (Reshape) (None, 1000) 0 _________________________________________________________________ act_softmax (Activation) (None, 1000) 0 ================================================================= Total params: 4,253,864 Trainable params: 4,231,976 Non-trainable params: 21,888 _________________________________________________________________ ###Markdown Just examining the above structure, several clues to neural network architecture become evident.Notice how some of the layers have zeros in their number of parameters. 
Items which are hyperparameters always show zero: nothing about those layers is learned. The other layers have learnable parameters that are adjusted as training occurs. The layer types are all hyperparameters; Keras will not change a convolution layer to a max pooling layer for you. However, the layers that have parameters are trained/adjusted by the training algorithm. Most of the parameters seen above are the weights of the neural network. Some of the parameters are marked as non-trainable; these cannot be adjusted by the training algorithm. When we later use transfer learning with this model, we will strip off the final layers that classify 1,000 items and replace them with our 3-class dog breed classification layer. Only our new layers will be trainable; we will mark the existing layers as non-trainable. The ReLU activation function is used throughout the neural network. Batch normalization and dropout are also used. We cannot see the dropout percentage here; that might be specified in the original paper. Many deep neural networks are pyramid shaped, and this is the case for this one. This neural network uses an expanding pyramid shape: as you can see, the neuron/filter counts expand from 32 to 64 to 128 to 256 to 512 and max out at 1,024. We will now use the MobileNet to classify several image URLs below. You can add additional URLs of your own to see how well the MobileNet can classify.
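The parameter counts in the summary follow directly from the layer shapes, so a few of them can be checked by hand. The sketch below is plain Python, not part of the notebook's pipeline; the layer names in the comments refer to the summary above. A depthwise convolution learns one k×k filter per channel, a pointwise (1×1) convolution learns one weight per input/output channel pair, and batch normalization stores four values per channel (gamma, beta, and the non-trainable moving mean and variance):

```python
# Hand-computed parameter counts for a few MobileNet layers.

def depthwise_conv_params(kernel_size, channels):
    # one kernel_size x kernel_size filter per input channel
    return kernel_size * kernel_size * channels

def pointwise_conv_params(in_channels, out_channels):
    # a 1x1 convolution: one weight per (input, output) channel pair
    return in_channels * out_channels

def batchnorm_params(channels):
    # gamma, beta, moving mean, moving variance
    return 4 * channels

print(depthwise_conv_params(3, 32))      # conv_dw_1 in the summary: 288
print(pointwise_conv_params(32, 64))     # conv_pw_1: 2048
print(batchnorm_params(64))              # conv_pw_1_bn: 256
print(pointwise_conv_params(512, 1024))  # conv_pw_12: 524288
```

Note that the products come out exact because these convolution layers carry no bias terms; the batch normalization that follows each one makes a bias redundant.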
###Code
%matplotlib inline
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
import numpy as np
from io import BytesIO
from IPython.display import display, HTML
from tensorflow.keras.applications.mobilenet import decode_predictions

IMAGE_WIDTH = 224
IMAGE_HEIGHT = 224
IMAGE_CHANNELS = 3

images = [
    "https://cdn.shopify.com/s/files/1/0712/4751/products/SMA-01_2000x.jpg?v=1537468751",
    "https://farm2.static.flickr.com/1394/967537586_87b1358ad3.jpg",
    "https://sites.wustl.edu/jeffheaton/files/2016/07/jheaton_wustl1-262izm5-458x458.jpg",
    "https://1.bp.blogspot.com/-0vGbvWUrSAA/XP-OurPTA4I/AAAAAAAAgtg/"\
    "TGx6YiGBEGIMjnViDjvVnYzYp__DJ6I-gCLcBGAs/s320/B%252Bt%2525aMbJQkm3Z50rqput%252BA.jpg"
]

def make_square(img):
    # center-crop the image to a square
    cols,rows = img.size
    if rows>cols:
        pad = (rows-cols)/2
        img = img.crop((0,pad,cols,pad+cols))
    else:
        pad = (cols-rows)/2
        img = img.crop((pad,0,pad+rows,rows))
    return img

for url in images:
    x = []
    ImageFile.LOAD_TRUNCATED_IMAGES = False
    response = requests.get(url)
    img = Image.open(BytesIO(response.content))
    img.load()
    img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS)
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    pred = model.predict(x)

    display("___________________________________________________________________________________________")
    display(img)
    print(np.argmax(pred,axis=1))
    lst = decode_predictions(pred, top=5)
    for itm in lst[0]:
        print(itm)
###Output _____no_output_____ ###Markdown Overall, the neural network is doing quite well. However, it does not classify me as a "person"; instead, I am classified as a "suit". Similarly, my English Bulldog Hickory is classified as a "pug". This is likely because I am only providing a closeup of his face. For many applications, MobileNet might be entirely acceptable as an image classifier. However, if you need to classify very specialized images that are not in the 1,000 image types supported by imagenet, it is necessary to use transfer learning.
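The `decode_predictions` call above simply takes the five largest entries of the softmax output and maps their indices to human-readable labels. The ranking step can be sketched in plain Python; this is illustrative only, with a toy five-class probability vector standing in for the real 1,000-class output:

```python
# Top-k selection over a probability vector, as performed conceptually
# before class indices are mapped to labels.
def top_k(probs, k=5):
    # indices of the k largest probabilities, most probable first
    return sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]

probs = [0.02, 0.70, 0.08, 0.15, 0.05]  # toy softmax output
print(top_k(probs, 3))  # [1, 3, 2]
```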
Transfer
It is possible to create your own image classification network from scratch, but this would take considerable time and resources. Just creating a dog breed classifier would require many pictures of dogs, labeled by breed. By using a pretrained neural network, you are tapping into knowledge already built into the lower layers of the neural network. The transferred layers likely already have some notion of eyes, ears, feet, and fur. These lower-level concepts help to train the neural network to identify dog breeds.
Next we reload the MobileNet; however, this time we set the *include_top* parameter to *False*. This instructs Keras to not load the final classification layers. This is the common mode of operation for transfer learning. We display a summary to see that the top classification layer is now missing. ###Code
base_model=MobileNet(weights='imagenet',include_top=False)  # load MobileNet without the final 1,000-class layer
base_model.summary()
###Output C:\Users\jheaton\Miniconda3\envs\tensorflow\lib\site-packages\keras_applications\mobilenet.py:207: UserWarning: `input_shape` is undefined or non-square, or `rows` is not in [128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default. warnings.warn('`input_shape` is undefined or non-square, ' ###Markdown We will add new top layers to the neural network. Our final SoftMax layer includes support for 3 classes. ###Code
x=base_model.output
x=GlobalAveragePooling2D()(x)
x=Dense(1024,activation='relu')(x)
x=Dense(1024,activation='relu')(x)
preds=Dense(3,activation='softmax')(x)
###Output _____no_output_____ ###Markdown Next, we mark the original MobileNet layers as non-trainable and our new layers as trainable.
###Code
model=Model(inputs=base_model.input,outputs=preds)

for layer in model.layers[:20]:
    layer.trainable=False
for layer in model.layers[20:]:
    layer.trainable=True
###Output _____no_output_____ ###Markdown To train the neural network we must create a directory structure to hold the images, which the Keras command **flow_from_directory** can then read. Each class is a folder that contains images of that class. We can also specify a target size; in this case, the original MobileNet size of 224x224 is desired. ###Code
if COLAB:
    PATH = ""
else:
    PATH = "./data/transfer"

train_datagen=ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator=train_datagen.flow_from_directory(PATH,
                                                 target_size=(224,224),
                                                 color_mode='rgb',
                                                 batch_size=1,
                                                 class_mode='categorical',
                                                 shuffle=True)
###Output _____no_output_____ ###Markdown We are now ready to compile and fit the neural network. Notice we are using **fit_generator** rather than **fit**. This is because we are using the convenient **ImageDataGenerator**.
###Code
%matplotlib inline
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
import numpy as np
from io import BytesIO
from IPython.display import display, HTML
from tensorflow.keras.applications.mobilenet import decode_predictions

IMAGE_WIDTH = 224
IMAGE_HEIGHT = 224
IMAGE_CHANNELS = 3

images = [
    "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a8/02.Owczarek_niemiecki_u%C5%BCytkowy_kr%C3%B3tkow%C5%82osy_suka.jpg/2560px-02.Owczarek_niemiecki_u%C5%BCytkowy_kr%C3%B3tkow%C5%82osy_suka.jpg",
    "https://upload.wikimedia.org/wikipedia/commons/5/51/DSHwiki.jpg",
    "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e5/Axel%2C_the_English_Bulldog.jpg/440px-Axel%2C_the_English_Bulldog.jpg",
    "https://1.bp.blogspot.com/-0vGbvWUrSAA/XP-OurPTA4I/AAAAAAAAgtg/TGx6YiGBEGIMjnViDjvVnYzYp__DJ6I-gCLcBGAs/s320/B%252Bt%2525aMbJQkm3Z50rqput%252BA.jpg",
    "https://thehappypuppysite.com/wp-content/uploads/2017/12/poodle1.jpg",
    "https://upload.wikimedia.org/wikipedia/commons/thumb/4/40/Pudel_Grossschwarz.jpg/440px-Pudel_Grossschwarz.jpg"
]

def make_square(img):
    # center-crop the image to a square
    cols,rows = img.size
    if rows>cols:
        pad = (rows-cols)/2
        img = img.crop((0,pad,cols,pad+cols))
    else:
        pad = (cols-rows)/2
        img = img.crop((pad,0,pad+rows,rows))
    return img

for url in images:
    x = []
    ImageFile.LOAD_TRUNCATED_IMAGES = False
    response = requests.get(url)
    img = Image.open(BytesIO(response.content))
    img.load()
    img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS)
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    pred = model.predict(x)

    display("___________________________________________________________________________________________")
    display(img)
    print(np.argmax(pred,axis=1))
###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks
**Module 9: Transfer Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St.
Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).

Module 9 Material
* Part 9.1: Introduction to Keras Transfer Learning [[Video]](https://www.youtube.com/watch?v=WLlP6S-Z8Xs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_1_keras_transfer.ipynb)
* Part 9.2: Popular Pretrained Neural Networks for Keras [[Video]](https://www.youtube.com/watch?v=ctVA1_46YEE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_2_popular_transfer.ipynb)
* **Part 9.3: Transfer Learning for Computer Vision and Keras** [[Video]](https://www.youtube.com/watch?v=61vMUm_XBMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_3_transfer_cv.ipynb)
* Part 9.4: Transfer Learning for Languages and Keras [[Video]](https://www.youtube.com/watch?v=ajmAAg9FxXA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_4_transfer_nlp.ipynb)
* Part 9.5: Transfer Learning for Keras Feature Engineering [[Video]](https://www.youtube.com/watch?v=Dttxsm8zpL8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_5_transfer_feature_eng.ipynb)

Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code
# Start CoLab
try:
    %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False
###Output Note: not using Google CoLab ###Markdown Part 9.3: Transfer Learning for Computer Vision and Keras
In this part, we will use transfer learning to create a simple neural network that can recognize dog breeds. To keep the example simple, we will only train for a handful of breeds.
You can find a much more advanced form of this model at the [Microsoft Dog Breed Image Search](https://www.bing.com/visualsearch/Microsoft/WhatDog). To keep computation times to a minimum, we will make use of the MobileNet included in Keras. We will begin by loading the entire MobileNet and seeing how well it classifies with several test images. MobileNet can classify 1,000 different images. We will ultimately extend it to classify image types that are not in its dataset, in this example, three dog breeds. However, we begin by classifying image types among those in MobileNet's original training set. Even though our test images were not in its training set, the loaded neural network should classify them. ###Code
import pandas as pd
import numpy as np
import os
import tensorflow.keras
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense,GlobalAveragePooling2D
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
###Output _____no_output_____ ###Markdown We begin by downloading weights for a MobileNet trained for the imagenet dataset, which will take some time to download the first time you train the network. ###Code
model = MobileNet(weights='imagenet',include_top=True)
###Output Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.6/mobilenet_1_0_224_tf.h5 17227776/17225924 [==============================] - 1s 0us/step ###Markdown The loaded network is a Keras neural network. However, this is a neural network that a third party engineered on advanced hardware.
Merely looking at the structure of an advanced state-of-the-art neural network can be educational. ###Code model.summary() ###Output Model: "mobilenet_1.00_224" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 225, 225, 3) 0 _________________________________________________________________ conv1 (Conv2D) (None, 112, 112, 32) 864 _________________________________________________________________ conv1_bn (BatchNormalization (None, 112, 112, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, 112, 112, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, 112, 112, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, 112, 112, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, 112, 112, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, 112, 112, 64) 0 _________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, 113, 113, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, 56, 56, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, 56, 56, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, 56, 56, 64) 0 
_________________________________________________________________ conv_pw_2 (Conv2D) (None, 56, 56, 128) 8192 _________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, 56, 56, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, 56, 56, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, 57, 57, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, 28, 28, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, 28, 28, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, 28, 28, 128) 0 _________________________________________________________________ conv_pw_4 (Conv2D) (None, 28, 28, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_4_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, 28, 28, 256) 2304 
_________________________________________________________________ conv_dw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_dw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, 28, 28, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, 29, 29, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, 14, 14, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, 14, 14, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, 14, 14, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, 14, 14, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 
_________________________________________________________________ conv_pw_7_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, 14, 14, 512) 262144 
_________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, 15, 15, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, 7, 7, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, 7, 7, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, 7, 7, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, 7, 7, 1024) 524288 _________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, 7, 7, 1024) 9216 _________________________________________________________________ conv_dw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 
_________________________________________________________________ conv_dw_13_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ conv_pw_13 (Conv2D) (None, 7, 7, 1024) 1048576 _________________________________________________________________ conv_pw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_13_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ global_average_pooling2d (Gl (None, 1024) 0 _________________________________________________________________ reshape_1 (Reshape) (None, 1, 1, 1024) 0 _________________________________________________________________ dropout (Dropout) (None, 1, 1, 1024) 0 _________________________________________________________________ conv_preds (Conv2D) (None, 1, 1, 1000) 1025000 _________________________________________________________________ reshape_2 (Reshape) (None, 1000) 0 _________________________________________________________________ act_softmax (Activation) (None, 1000) 0 ================================================================= Total params: 4,253,864 Trainable params: 4,231,976 Non-trainable params: 21,888 _________________________________________________________________ ###Markdown Just examining the above structure, several clues to neural network architecture become evident. Notice how some of the layers have zeros in their number of parameters: layers such as padding, pooling, activation, reshape, and dropout have no weights to learn, so their parameter count is zero; their behavior is fixed entirely by hyperparameters, which the fitting process does not change. The other layers have learnable parameters that are adjusted as training occurs. The layer types themselves are also hyperparameters; Keras will not change a convolution layer to a max-pooling layer. However, the layers that have parameters are trained/adjusted by the training algorithm.
Most of the parameters seen above are the weights of the neural network. The programmer can configure some of the parameters as non-trainable, and the training algorithm cannot adjust these. When we later use transfer learning with this model, we will strip off the final layers that classify 1,000 items and replace them with our own three-class dog breed classification layer. Only our new layers will be trainable; we will mark the existing layers as non-trainable. This neural network makes extensive use of the ReLU activation function, which is a common choice for activation functions. The network also makes use of batch normalization and dropout. Many deep neural networks are pyramid-shaped, and this is the case for this one: it uses an expanding pyramid shape, with the neuron/filter counts growing from 32 to 64 to 128 to 256 to 512 and maxing out at 1,024. We will now use the MobileNet to classify the image URLs below. You can add additional URLs of your own to see how well the MobileNet classifies them.
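The parameter counts in the summary are simple arithmetic on the layer shapes, and checking a few of them makes the structure clearer. The sketch below is plain Python (no Keras required); the helper function names are ours, not Keras API, and the batch-norm channel list is read off the summary above:

```python
# Back-of-the-envelope check of parameter counts from the MobileNet summary.
# A 3x3 depthwise conv learns one 3x3 filter per channel (no bias here);
# a 1x1 pointwise conv learns C_in * C_out weights; batch normalization
# stores gamma, beta, moving mean, and moving variance: 4 values per channel,
# of which only gamma and beta are trainable.

def depthwise_params(k, channels):
    return k * k * channels

def pointwise_params(c_in, c_out):
    return c_in * c_out

def batchnorm_params(channels):
    return 4 * channels

print(depthwise_params(3, 32))              # conv_dw_1: 288
print(pointwise_params(32, 64))             # conv_pw_1: 2048
print(batchnorm_params(64))                 # conv_pw_1_bn: 256
print(pointwise_params(1024, 1000) + 1000)  # conv_preds (1x1 conv + biases): 1025000

# The 21,888 non-trainable parameters are exactly the moving mean/variance
# (2 per channel) of the 27 batch-norm layers listed in the summary:
bn_channels = ([32, 32, 64, 64, 128, 128, 128, 128, 256, 256, 256, 256]
               + [512] * 12 + [1024] * 3)
print(2 * sum(bn_channels))                 # 21888
```

When Keras is loaded, the same numbers can also be read programmatically from each layer via `layer.count_params()`.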
###Code %matplotlib inline from PIL import Image, ImageFile from matplotlib.pyplot import imshow import requests import numpy as np from io import BytesIO from IPython.display import display, HTML from tensorflow.keras.applications.mobilenet import decode_predictions IMAGE_WIDTH = 224 IMAGE_HEIGHT = 224 IMAGE_CHANNELS = 3 images = [ "https://cdn.shopify.com/s/files/1/0712/4751/products/SMA-01_2000x.jpg?v=1537468751", "https://farm2.static.flickr.com/1394/967537586_87b1358ad3.jpg", "https://sites.wustl.edu/jeffheaton/files/2016/07/jheaton_wustl1-262izm5-458x458.jpg", "https://1.bp.blogspot.com/-0vGbvWUrSAA/XP-OurPTA4I/AAAAAAAAgtg/"\ "TGx6YiGBEGIMjnViDjvVnYzYp__DJ6I-gCLcBGAs/s320/B%252Bt%2525aMbJQkm3Z50rqput%252BA.jpg" ] def make_square(img): cols,rows = img.size if rows>cols: pad = (rows-cols)/2 img = img.crop((pad,0,cols,cols)) else: pad = (cols-rows)/2 img = img.crop((0,pad,rows,rows)) return img for url in images: x = [] ImageFile.LOAD_TRUNCATED_IMAGES = False response = requests.get(url) img = Image.open(BytesIO(response.content)) img.load() img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) pred = model.predict(x) display("___________________________________________________________________________________________") display(img) print(np.argmax(pred,axis=1)) lst = decode_predictions(pred, top=5) for itm in lst[0]: print(itm) ###Output _____no_output_____ ###Markdown Overall, the neural network is doing quite well. However, it does not classify me as a "person"; instead, it classifies me as a "suit." Similarly, it incorrectly classifies my English Bulldog Hickory as a "pug". My dog's mistaken classification might be forgivable, as I am only providing a closeup of his face.For many applications, MobileNet might be entirely acceptable as an image classifier. 
However, if you need to classify very specialized images not among the 1,000 image types supported by imagenet, it is necessary to use transfer learning. Transfer It is possible to create your own image classification network from scratch. This endeavor would take considerable time and resources. Just creating a dog breed classifier would require many pictures of dogs, labeled by breed. By using a pretrained neural network, you are tapping into knowledge already built into the lower layers of the network. The transferred layers likely already have some notion of eyes, ears, feet, and fur, and these lower-level concepts help to train the neural network to identify dog breeds. Next, we reload the MobileNet; however, we set the *include_top* parameter to *False*. This setting instructs Keras not to load the final classification layers, which is the common mode of operation for transfer learning. We display a summary to see that the top classification layers are now missing. ###Code base_model=MobileNet(weights='imagenet',include_top=False) #imports the mobilenet model and discards the last 1000 neuron layer. base_model.summary() ###Output C:\Users\jheaton\Miniconda3\envs\tensorflow\lib\site-packages\keras_applications\mobilenet.py:207: UserWarning: `input_shape` is undefined or non-square, or `rows` is not in [128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default. warnings.warn('`input_shape` is undefined or non-square, ' ###Markdown We will add new top layers to the neural network. Our final SoftMax layer includes support for 3 classes. ###Code x=base_model.output x=GlobalAveragePooling2D()(x) x=Dense(1024,activation='relu')(x) x=Dense(1024,activation='relu')(x) preds=Dense(3,activation='softmax')(x) ###Output _____no_output_____ ###Markdown Next, we mark the original MobileNet layers as non-trainable and our new layers as trainable.
###Code model=Model(inputs=base_model.input,outputs=preds) for layer in model.layers[:20]: layer.trainable=False for layer in model.layers[20:]: layer.trainable=True ###Output _____no_output_____ ###Markdown To train the neural network, we must organize the images into a directory structure. The Keras command **flow_from_directory** reads images laid out this way: each class gets its own folder containing the images of that class. We can also specify a target size; in this case, the original MobileNet size of 224x224 is desired. ###Code if COLAB: PATH = "" else: PATH = "./data/transfer" train_datagen=ImageDataGenerator(preprocessing_function=preprocess_input) train_generator=train_datagen.flow_from_directory(PATH, target_size=(224,224), color_mode='rgb', batch_size=1, class_mode='categorical', shuffle=True) ###Output _____no_output_____ ###Markdown We are now ready to compile and fit the neural network. Notice we are using **fit_generator** rather than **fit**; this choice is because we are using the convenient **ImageDataGenerator**. ###Code model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy']) step_size_train=train_generator.n//train_generator.batch_size model.fit_generator(generator=train_generator, steps_per_epoch=step_size_train, epochs=50) ###Output _____no_output_____ ###Markdown We are now ready to see how our new model can predict dog breeds. The URLs in the code below provide several example dogs. Feel free to add your own.
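For reference, the folder layout that **flow_from_directory** expects can be sketched in plain Python, with no Keras needed. The class and file names below are illustrative placeholders, not the actual dataset; Keras infers labels the same way, mapping subfolders, sorted alphanumerically, to class indices.

```python
import os
import tempfile

# Sketch of the directory layout flow_from_directory expects:
# one subfolder per class, each holding that class's images.
# Folder and file names here are illustrative placeholders.
root = tempfile.mkdtemp()
layout = {
    "breed_a": ["dog1.jpg", "dog2.jpg"],
    "breed_b": ["dog3.jpg"],
    "breed_c": ["dog4.jpg", "dog5.jpg"],
}
for cls, files in layout.items():
    os.makedirs(os.path.join(root, cls))
    for name in files:
        # Touch an empty placeholder file standing in for an image.
        open(os.path.join(root, cls, name), "w").close()

# Subfolder names, sorted alphanumerically, become class indices 0, 1, 2, ...
classes = sorted(d for d in os.listdir(root)
                 if os.path.isdir(os.path.join(root, d)))
class_indices = {c: i for i, c in enumerate(classes)}
print(class_indices)  # {'breed_a': 0, 'breed_b': 1, 'breed_c': 2}
```

After a real generator is built, `train_generator.class_indices` reports this same mapping, which is useful for decoding the model's softmax outputs.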
###Code %matplotlib inline from PIL import Image, ImageFile from matplotlib.pyplot import imshow import requests import numpy as np from io import BytesIO from IPython.display import display, HTML from tensorflow.keras.applications.mobilenet import decode_predictions IMAGE_WIDTH = 224 IMAGE_HEIGHT = 224 IMAGE_CHANNELS = 3 images = [ "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a8/02.Owczarek_niemiecki_u%C5%BCytkowy_kr%C3%B3tkow%C5%82osy_suka.jpg/2560px-02.Owczarek_niemiecki_u%C5%BCytkowy_kr%C3%B3tkow%C5%82osy_suka.jpg", "https://upload.wikimedia.org/wikipedia/commons/5/51/DSHwiki.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e5/Axel%2C_the_English_Bulldog.jpg/440px-Axel%2C_the_English_Bulldog.jpg", "https://1.bp.blogspot.com/-0vGbvWUrSAA/XP-OurPTA4I/AAAAAAAAgtg/TGx6YiGBEGIMjnViDjvVnYzYp__DJ6I-gCLcBGAs/s320/B%252Bt%2525aMbJQkm3Z50rqput%252BA.jpg", "https://thehappypuppysite.com/wp-content/uploads/2017/12/poodle1.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/4/40/Pudel_Grossschwarz.jpg/440px-Pudel_Grossschwarz.jpg" ] def make_square(img): cols,rows = img.size if rows>cols: pad = (rows-cols)/2 img = img.crop((pad,0,cols,cols)) else: pad = (cols-rows)/2 img = img.crop((0,pad,rows,rows)) return img for url in images: x = [] ImageFile.LOAD_TRUNCATED_IMAGES = False response = requests.get(url) img = Image.open(BytesIO(response.content)) img.load() img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) pred = model.predict(x) display("___________________________________________________________________________________________") display(img) print(np.argmax(pred,axis=1)) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 9: Transfer Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. 
Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 9 Material* Part 9.1: Introduction to Keras Transfer Learning [[Video]](https://www.youtube.com/watch?v=WLlP6S-Z8Xs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_1_keras_transfer.ipynb)* Part 9.2: Popular Pretrained Neural Networks for Keras [[Video]](https://www.youtube.com/watch?v=ctVA1_46YEE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_2_popular_transfer.ipynb)* **Part 9.3: Transfer Learning for Computer Vision and Keras** [[Video]](https://www.youtube.com/watch?v=61vMUm_XBMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_3_transfer_cv.ipynb)* Part 9.4: Transfer Learning for Languages and Keras [[Video]](https://www.youtube.com/watch?v=ajmAAg9FxXA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_4_transfer_nlp.ipynb)* Part 9.5: Transfer Learning for Keras Feature Engineering [[Video]](https://www.youtube.com/watch?v=Dttxsm8zpL8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_5_transfer_feature_eng.ipynb) Google CoLab Instructions The following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code # Start CoLab try: from google.colab import drive %tensorflow_version 2.x drive.mount('/content/drive', force_remount=True) COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Mounted at /content/drive Note: using Google CoLab ###Markdown Part 9.3: Transfer Learning for Computer Vision and Keras In this part, we will use transfer learning to create a simple neural network that can classify new image types that are not in MobileNet. To keep the example simple, we will only train it on images of older storage technologies, such as floppy disks and tapes.
This dataset can be downloaded from the following location: [download link](https://data.heatonresearch.com/data/t81-558/images/trans.zip).To keep computation times to a minimum, we will make use of the MobileNet included in Keras. We will begin by loading the entire MobileNet and seeing how well it classifies with several test images. MobileNet can classify 1,000 different images. We will ultimately extend it to classify image types that are not in its dataset, in this example, four media types. ###Code import pandas as pd import numpy as np import os import tensorflow.keras import matplotlib.pyplot as plt from tensorflow.keras.layers import Dense,GlobalAveragePooling2D from tensorflow.keras.applications import MobileNet from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.mobilenet import preprocess_input from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam ###Output _____no_output_____ ###Markdown We begin by downloading weights for a MobileNet trained for the imagenet dataset, which will take some time to download the first time you train the network. ###Code model = MobileNet(weights='imagenet',include_top=True) ###Output _____no_output_____ ###Markdown The loaded network is a Keras neural network. However, this is a neural network that a third party engineered on advanced hardware. Merely looking at the structure of an advanced state-of-the-art neural network can be educational. 
###Code model.summary() ###Output Model: "mobilenet_1.00_224" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 225, 225, 3) 0 _________________________________________________________________ conv1 (Conv2D) (None, 112, 112, 32) 864 _________________________________________________________________ conv1_bn (BatchNormalization (None, 112, 112, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, 112, 112, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, 112, 112, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, 112, 112, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, 112, 112, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, 112, 112, 64) 0 _________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, 113, 113, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, 56, 56, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, 56, 56, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, 56, 56, 64) 0 _________________________________________________________________ conv_pw_2 (Conv2D) (None, 56, 56, 128) 8192 
_________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, 56, 56, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, 56, 56, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, 57, 57, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, 28, 28, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, 28, 28, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, 28, 28, 128) 0 _________________________________________________________________ conv_pw_4 (Conv2D) (None, 28, 28, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_4_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, 28, 28, 256) 2304 _________________________________________________________________ conv_dw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 
_________________________________________________________________ conv_dw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, 28, 28, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, 29, 29, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, 14, 14, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, 14, 14, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, 14, 14, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, 14, 14, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_7_relu (ReLU) (None, 14, 14, 512) 0 
_________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 
_________________________________________________________________ conv_pw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, 15, 15, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, 7, 7, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, 7, 7, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, 7, 7, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, 7, 7, 1024) 524288 _________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, 7, 7, 1024) 9216 _________________________________________________________________ conv_dw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_dw_13_relu (ReLU) (None, 7, 7, 1024) 0 
_________________________________________________________________ conv_pw_13 (Conv2D) (None, 7, 7, 1024) 1048576 _________________________________________________________________ conv_pw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_13_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ global_average_pooling2d (Gl (None, 1024) 0 _________________________________________________________________ reshape_1 (Reshape) (None, 1, 1, 1024) 0 _________________________________________________________________ dropout (Dropout) (None, 1, 1, 1024) 0 _________________________________________________________________ conv_preds (Conv2D) (None, 1, 1, 1000) 1025000 _________________________________________________________________ reshape_2 (Reshape) (None, 1000) 0 _________________________________________________________________ predictions (Activation) (None, 1000) 0 ================================================================= Total params: 4,253,864 Trainable params: 4,231,976 Non-trainable params: 21,888 _________________________________________________________________ ###Markdown Just examining the above structure, several clues to neural network architecture become evident. Notice how some of the layers have zeros in their number of parameters: layers such as padding, pooling, activation, reshape, and dropout have no weights to learn, so their parameter count is zero; their behavior is fixed entirely by hyperparameters, which the fitting process does not change. The other layers have learnable parameters that are adjusted as training occurs. The layer types themselves are also hyperparameters; Keras will not change a convolution layer to a max-pooling layer. However, the layers that have parameters are trained/adjusted by the training algorithm. Most of the parameters seen above are the weights of the neural network. The programmer can configure some of the parameters as non-trainable, and the training algorithm cannot adjust these.
When we later use transfer learning with this model, we will strip off the final layers that classify 1,000 items and replace them with a new classification layer for our four media types. Only our new layers will be trainable; we will mark the existing layers as non-trainable. This neural network makes extensive use of the ReLU activation function, which is a common choice for activation functions. The network also makes use of batch normalization and dropout. Many deep neural networks are pyramid-shaped, and this is the case for this one: it uses an expanding pyramid shape, with the neuron/filter counts growing from 32 to 64 to 128 to 256 to 512 and maxing out at 1,024. We will now use the MobileNet to classify the image URLs below. You can add additional URLs of your own to see how well the MobileNet classifies them. ###Code %matplotlib inline from PIL import Image, ImageFile from matplotlib.pyplot import imshow import requests import numpy as np from io import BytesIO from IPython.display import display, HTML from tensorflow.keras.applications.mobilenet import decode_predictions IMAGE_WIDTH = 224 IMAGE_HEIGHT = 224 IMAGE_CHANNELS = 3 ROOT = "https://data.heatonresearch.com/data/t81-558/images/" def make_square(img): cols,rows = img.size if rows>cols: pad = (rows-cols)/2 img = img.crop((pad,0,cols,cols)) else: pad = (cols-rows)/2 img = img.crop((0,pad,rows,rows)) return img L = "_________________________________________________________" def classify_array(images): for url in images: x = [] ImageFile.LOAD_TRUNCATED_IMAGES = False response = requests.get(url) img = Image.open(BytesIO(response.content)) img.load() img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) x = x[:,:,:,:3] # drop the alpha channel if present pred = model.predict(x) display(L) display(img) print(np.argmax(pred,axis=1)) lst = decode_predictions(pred, top=5) for itm in lst[0]: print(itm) classify_array( [
ROOT+"soccer_ball.jpg", ROOT+"race_truck.jpg" ]) ###Output _____no_output_____ ###Markdown Overall, the neural network is doing quite well. For many applications, MobileNet might be entirely acceptable as an image classifier. However, if you need to classify very specialized images not among the 1,000 image types supported by imagenet, it is necessary to use transfer learning. Transfer It is possible to create your own image classification network from scratch. This endeavor would take considerable time and resources; just creating an image classifier would require many labeled pictures. By using a pretrained neural network, you are tapping into knowledge already built into the lower layers of the network. The transferred layers likely already have some notion of eyes, ears, feet, and fur, and these lower-level concepts help to train the neural network to identify these images. Next, we reload the MobileNet; however, we set the *include_top* parameter to *False*. This setting instructs Keras not to load the final classification layers, which is the common mode of operation for transfer learning. We display a summary to see that the top classification layers are now missing. ###Code base_model=MobileNet(weights='imagenet',include_top=False) #imports the mobilenet model and discards the last 1000 neuron layer. base_model.summary() ###Output WARNING:tensorflow:`input_shape` is undefined or non-square, or `rows` is not in [128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.
Model: "mobilenet_1.00_224" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, None, None, 3)] 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, None, None, 3) 0 _________________________________________________________________ conv1 (Conv2D) (None, None, None, 32) 864 _________________________________________________________________ conv1_bn (BatchNormalization (None, None, None, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, None, None, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, None, None, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, None, None, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, None, None, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, None, None, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, None, None, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, None, None, 64) 0 _________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, None, None, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, None, None, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, None, None, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, None, None, 64) 0 _________________________________________________________________ conv_pw_2 (Conv2D) (None, None, None, 128) 8192 
_________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, None, None, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, None, None, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, None, None, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, None, None, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, None, None, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, None, None, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, None, None, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, None, None, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, None, None, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, None, None, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, None, None, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, None, None, 128) 0 _________________________________________________________________ conv_pw_4 (Conv2D) (None, None, None, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, None, None, 256) 1024 _________________________________________________________________ conv_pw_4_relu (ReLU) (None, None, None, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, None, None, 256) 2304 _________________________________________________________________ conv_dw_5_bn (BatchNormaliza 
(None, None, None, 256) 1024 _________________________________________________________________ conv_dw_5_relu (ReLU) (None, None, None, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, None, None, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, None, None, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, None, None, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, None, None, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, None, None, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, None, None, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, None, None, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, None, None, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, None, None, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ 
conv_pw_7_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, None, None, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, None, None, 512) 262144 _________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_9_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, None, None, 512) 262144 
_________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_10_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, None, None, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, None, None, 1024) 524288 _________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, None, None, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, None, None, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, None, None, 1024) 9216 _________________________________________________________________ 
conv_dw_13_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_dw_13_relu (ReLU) (None, None, None, 1024) 0
_________________________________________________________________
conv_pw_13 (Conv2D) (None, None, None, 1024) 1048576
_________________________________________________________________
conv_pw_13_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_pw_13_relu (ReLU) (None, None, None, 1024) 0
=================================================================
Total params: 3,228,864
Trainable params: 3,206,976
Non-trainable params: 21,888
_________________________________________________________________
###Markdown
We will add new top layers to the neural network. Our final softmax layer includes support for four classes.

###Code
x=base_model.output
x=GlobalAveragePooling2D()(x)
x=Dense(1024,activation='relu')(x)
x=Dense(1024,activation='relu')(x)
preds=Dense(4,activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Next, we mark the original MobileNet layers as non-trainable and our new layers as trainable.

###Code
model=Model(inputs=base_model.input,outputs=preds)

for layer in model.layers[:20]:
    layer.trainable=False
for layer in model.layers[20:]:
    layer.trainable=True
###Output
_____no_output_____
###Markdown
To train the neural network, we must create a directory structure to hold the images. The Keras command **flow_from_directory** reads images from such a structure for us. It requires that a folder be laid out as follows: each class is a folder that contains images of that class.
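As a concrete sketch (using the four class names from this example and a hypothetical relative path `trans`), the expected layout can be created with the standard library; `flow_from_directory` will later infer the class labels from these folder names:

```python
from pathlib import Path

# Hypothetical dataset root; substitute your own location.
root = Path("trans")

# One sub-folder per class; images of each class go inside its folder.
for cls in ["cd", "disk35", "disk525", "tape"]:
    (root / cls).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in root.iterdir()))
# → ['cd', 'disk35', 'disk525', 'tape']
```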
We can also specify a target size; in this case, the original MobileNet size of 224x224 is desired.

For this simple example I included four classes; my directories are set up as follows:

* **trans** - The root directory of the dataset.
* **trans/cd** - Pictures of CDs.
* **trans/disk35** - Pictures of 3.5 inch disks.
* **trans/disk525** - Pictures of 5.25 inch disks.
* **trans/tape** - Pictures of tapes.

###Code
if COLAB:
    PATH = "/content/drive/My Drive/projects/trans/"
else:
    PATH = 'c:\\jth\\data\\trans'

train_datagen=ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator=train_datagen.flow_from_directory(PATH,
                                                  target_size=(224,224),
                                                  color_mode='rgb',
                                                  batch_size=52,
                                                  class_mode='categorical',
                                                  shuffle=True)
###Output
Found 52 images belonging to 4 classes.
###Markdown
We are now ready to compile and fit the neural network.

###Code
model.compile(optimizer='Adam',loss='categorical_crossentropy',
              metrics=['accuracy'])
step_size_train=train_generator.n//train_generator.batch_size
model.fit(train_generator, steps_per_epoch=step_size_train, epochs=50)
###Output
Epoch 1/50 1/1 [==============================] - 0s 2ms/step - loss: 1.6215 - accuracy: 0.2885 Epoch 2/50 1/1 [==============================] - 0s 1ms/step - loss: 2.0469 - accuracy: 0.4808 Epoch 3/50 1/1 [==============================] - 0s 1ms/step - loss: 0.0366 - accuracy: 1.0000 Epoch 4/50 1/1 [==============================] - 0s 1ms/step - loss: 0.9929 - accuracy: 0.6923 Epoch 5/50 1/1 [==============================] - 0s 2ms/step - loss: 0.0829 - accuracy: 0.9808 Epoch 6/50 1/1 [==============================] - 0s 1ms/step - loss: 0.0026 - accuracy: 1.0000 Epoch 7/50 1/1 [==============================] - 0s 1ms/step - loss: 8.4217e-04 - accuracy: 1.0000 Epoch 8/50 1/1 [==============================] - 0s 1ms/step - loss: 0.0011 - accuracy: 1.0000 Epoch 9/50 1/1 [==============================] - 0s 2ms/step - loss: 0.0018 - accuracy: 1.0000 Epoch 10/50 1/1
[==============================] - 0s 2ms/step - loss: 0.0024 - accuracy: 1.0000 Epoch 11/50 1/1 [==============================] - 0s 3ms/step - loss: 0.0024 - accuracy: 1.0000 Epoch 12/50 1/1 [==============================] - 0s 2ms/step - loss: 0.0018 - accuracy: 1.0000 Epoch 13/50 1/1 [==============================] - 0s 3ms/step - loss: 0.0010 - accuracy: 1.0000 Epoch 14/50 1/1 [==============================] - 0s 1ms/step - loss: 5.5196e-04 - accuracy: 1.0000 Epoch 15/50 1/1 [==============================] - 0s 2ms/step - loss: 2.9253e-04 - accuracy: 1.0000 Epoch 16/50 1/1 [==============================] - 0s 1ms/step - loss: 1.5706e-04 - accuracy: 1.0000 Epoch 17/50 1/1 [==============================] - 0s 2ms/step - loss: 8.5439e-05 - accuracy: 1.0000 Epoch 18/50 1/1 [==============================] - 0s 2ms/step - loss: 4.7495e-05 - accuracy: 1.0000 Epoch 19/50 1/1 [==============================] - 0s 3ms/step - loss: 2.7115e-05 - accuracy: 1.0000 Epoch 20/50 1/1 [==============================] - 0s 2ms/step - loss: 1.6015e-05 - accuracy: 1.0000 Epoch 21/50 1/1 [==============================] - 0s 2ms/step - loss: 9.8780e-06 - accuracy: 1.0000 Epoch 22/50 1/1 [==============================] - 0s 1ms/step - loss: 6.5151e-06 - accuracy: 1.0000 Epoch 23/50 1/1 [==============================] - 0s 1ms/step - loss: 4.7202e-06 - accuracy: 1.0000 Epoch 24/50 1/1 [==============================] - 0s 2ms/step - loss: 3.8330e-06 - accuracy: 1.0000 Epoch 25/50 1/1 [==============================] - 0s 1ms/step - loss: 3.5028e-06 - accuracy: 1.0000 Epoch 26/50 1/1 [==============================] - 0s 2ms/step - loss: 3.5234e-06 - accuracy: 1.0000 Epoch 27/50 1/1 [==============================] - 0s 2ms/step - loss: 3.7640e-06 - accuracy: 1.0000 Epoch 28/50 1/1 [==============================] - 0s 2ms/step - loss: 4.1491e-06 - accuracy: 1.0000 Epoch 29/50 1/1 [==============================] - 0s 1ms/step - loss: 4.6075e-06 - accuracy: 1.0000 Epoch 30/50 
1/1 [==============================] - 0s 2ms/step - loss: 5.0750e-06 - accuracy: 1.0000 Epoch 31/50 1/1 [==============================] - 0s 4ms/step - loss: 5.5081e-06 - accuracy: 1.0000 Epoch 32/50 1/1 [==============================] - 0s 6ms/step - loss: 5.8794e-06 - accuracy: 1.0000 Epoch 33/50 1/1 [==============================] - 0s 2ms/step - loss: 6.1544e-06 - accuracy: 1.0000 Epoch 34/50 1/1 [==============================] - 0s 2ms/step - loss: 6.2942e-06 - accuracy: 1.0000 Epoch 35/50 1/1 [==============================] - 0s 1ms/step - loss: 6.3125e-06 - accuracy: 1.0000 Epoch 36/50 1/1 [==============================] - 0s 2ms/step - loss: 6.2140e-06 - accuracy: 1.0000 Epoch 37/50 1/1 [==============================] - 0s 1ms/step - loss: 5.9986e-06 - accuracy: 1.0000 Epoch 38/50 1/1 [==============================] - 0s 1ms/step - loss: 5.7075e-06 - accuracy: 1.0000 Epoch 39/50 1/1 [==============================] - 0s 1ms/step - loss: 5.3545e-06 - accuracy: 1.0000 Epoch 40/50 1/1 [==============================] - 0s 1ms/step - loss: 4.9878e-06 - accuracy: 1.0000 Epoch 41/50 1/1 [==============================] - 0s 2ms/step - loss: 4.6188e-06 - accuracy: 1.0000 Epoch 42/50 1/1 [==============================] - 0s 2ms/step - loss: 4.2498e-06 - accuracy: 1.0000 Epoch 43/50 1/1 [==============================] - 0s 2ms/step - loss: 3.8900e-06 - accuracy: 1.0000 Epoch 44/50 1/1 [==============================] - 0s 1ms/step - loss: 3.5439e-06 - accuracy: 1.0000 Epoch 45/50 1/1 [==============================] - 0s 1ms/step - loss: 3.2184e-06 - accuracy: 1.0000 Epoch 46/50 1/1 [==============================] - 0s 1ms/step - loss: 2.9181e-06 - accuracy: 1.0000 Epoch 47/50 1/1 [==============================] - 0s 3ms/step - loss: 2.6454e-06 - accuracy: 1.0000 Epoch 48/50 1/1 [==============================] - 0s 1ms/step - loss: 2.4001e-06 - accuracy: 1.0000 Epoch 49/50 1/1 [==============================] - 0s 2ms/step - loss: 2.1823e-06 - 
accuracy: 1.0000 Epoch 50/50 1/1 [==============================] - 0s 1ms/step - loss: 1.9898e-06 - accuracy: 1.0000
###Markdown
To make use of this neural network, we will need to know which output neuron corresponds to each of the training classes/directories we provided to the generator. By calling the **class_indices** property of the generator, we are provided with this information.

###Code
print(train_generator.class_indices)
###Output
{'cd': 0, 'disk35': 1, 'disk525': 2, 'tape': 3}
###Markdown
We are now ready to see how our new model can predict our classes. The URLs in the code provide some examples. Feel free to add your own. We did not use a large dataset, so it will not be perfect. A larger training set will improve accuracy.

###Code
%matplotlib inline

def classify_array(images,classes):
    inv_map = {v: k for k, v in classes.items()}
    for url in images:
        x = []
        ImageFile.LOAD_TRUNCATED_IMAGES = False
        response = requests.get(url)
        img = Image.open(BytesIO(response.content))
        img.load()
        img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS)
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        x = x[:,:,:,:3]  # drop a possible alpha channel
        pred = model.predict(x)
        display(L)
        display(img)
        pred2 = int(np.argmax(pred,axis=1))
        print(pred)
        print(inv_map[pred2])
        #print(classes[pred2])
        #print(pred[0])

classify_array( [
    #ROOT+"disk_35.png",
    #ROOT+"disk_525.png",
    #ROOT+"disk_35b.png",
    #ROOT+"IMG_1563.jpg",
    #ROOT+"IMG_1565.jpg",
    ROOT+"IMG_1567.jpg",
    ROOT+"IMG_1570.jpg"
],train_generator.class_indices)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks
**Module 9: Transfer Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
Module 9 Material
* Part 9.1: Introduction to Keras Transfer Learning [[Video]](https://www.youtube.com/watch?v=WLlP6S-Z8Xs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_1_keras_transfer.ipynb)
* Part 9.2: Popular Pretrained Neural Networks for Keras [[Video]](https://www.youtube.com/watch?v=ctVA1_46YEE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_2_popular_transfer.ipynb)
* **Part 9.3: Transfer Learning for Computer Vision and Keras** [[Video]](https://www.youtube.com/watch?v=61vMUm_XBMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_3_transfer_cv.ipynb)
* Part 9.4: Transfer Learning for Languages and Keras [[Video]](https://www.youtube.com/watch?v=ajmAAg9FxXA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_4_transfer_nlp.ipynb)
* Part 9.5: Transfer Learning for Keras Feature Engineering [[Video]](https://www.youtube.com/watch?v=Dttxsm8zpL8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_5_transfer_feature_eng.ipynb)

Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.

###Code
# Start CoLab
try:
    from google.colab import drive
    %tensorflow_version 2.x
    drive.mount('/content/drive', force_remount=True)
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False
###Output
Mounted at /content/drive
Note: using Google CoLab
###Markdown
Part 9.3: Transfer Learning for Computer Vision and Keras
In this part, we will use transfer learning to create a simple neural network that can classify new images that are not in MobileNet.
To keep the example simple, we will only train for older storage technologies, such as floppy disks, tapes, etc. This dataset can be downloaded from the following location: [download link](https://data.heatonresearch.com/data/t81-558/images/trans.zip).

To keep computation times to a minimum, we will make use of the MobileNet included in Keras. We will begin by loading the entire MobileNet and seeing how well it classifies several test images. MobileNet can classify 1,000 different image types. We will ultimately extend it to classify image types that are not in its dataset; in this example, four media types.

###Code
import pandas as pd
import numpy as np
import os
import tensorflow.keras
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense,GlobalAveragePooling2D
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
###Output
_____no_output_____
###Markdown
We begin by downloading weights for a MobileNet trained on the ImageNet dataset, which will take some time the first time you run the code.

###Code
model = MobileNet(weights='imagenet',include_top=True)
###Output
_____no_output_____
###Markdown
The loaded network is an ordinary Keras neural network; however, it was engineered by a third party on advanced hardware. Merely looking at the structure of an advanced state-of-the-art neural network can be educational.
###Code model.summary() ###Output Model: "mobilenet_1.00_224" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 225, 225, 3) 0 _________________________________________________________________ conv1 (Conv2D) (None, 112, 112, 32) 864 _________________________________________________________________ conv1_bn (BatchNormalization (None, 112, 112, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, 112, 112, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, 112, 112, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, 112, 112, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, 112, 112, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, 112, 112, 64) 0 _________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, 113, 113, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, 56, 56, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, 56, 56, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, 56, 56, 64) 0 _________________________________________________________________ conv_pw_2 (Conv2D) (None, 56, 56, 128) 8192 
_________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, 56, 56, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, 56, 56, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, 57, 57, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, 28, 28, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, 28, 28, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, 28, 28, 128) 0 _________________________________________________________________ conv_pw_4 (Conv2D) (None, 28, 28, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_4_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, 28, 28, 256) 2304 _________________________________________________________________ conv_dw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 
_________________________________________________________________ conv_dw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, 28, 28, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, 29, 29, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, 14, 14, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, 14, 14, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, 14, 14, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, 14, 14, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_7_relu (ReLU) (None, 14, 14, 512) 0 
_________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 
_________________________________________________________________ conv_pw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, 15, 15, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, 7, 7, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, 7, 7, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, 7, 7, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, 7, 7, 1024) 524288 _________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, 7, 7, 1024) 9216 _________________________________________________________________ conv_dw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_dw_13_relu (ReLU) (None, 7, 7, 1024) 0 
_________________________________________________________________
conv_pw_13 (Conv2D) (None, 7, 7, 1024) 1048576
_________________________________________________________________
conv_pw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096
_________________________________________________________________
conv_pw_13_relu (ReLU) (None, 7, 7, 1024) 0
_________________________________________________________________
global_average_pooling2d (Gl (None, 1024) 0
_________________________________________________________________
reshape_1 (Reshape) (None, 1, 1, 1024) 0
_________________________________________________________________
dropout (Dropout) (None, 1, 1, 1024) 0
_________________________________________________________________
conv_preds (Conv2D) (None, 1, 1, 1000) 1025000
_________________________________________________________________
reshape_2 (Reshape) (None, 1000) 0
_________________________________________________________________
predictions (Activation) (None, 1000) 0
=================================================================
Total params: 4,253,864
Trainable params: 4,231,976
Non-trainable params: 21,888
_________________________________________________________________
###Markdown
Just examining the above structure, several clues to the network's architecture become evident.

Notice how some of the layers have zeros in their number of parameters. These layers have nothing for the training algorithm to adjust; everything about them is a hyperparameter, and the fitting process does not change hyperparameters. The other layers have learnable parameters that are adjusted as training occurs. The layer types themselves are also hyperparameters; Keras will not change a convolution layer into a max-pooling layer. However, the layers that do have parameters are trained/adjusted by the training algorithm, and most of the parameters seen above are the weights of the neural network.

The programmer can also configure some of the parameters as non-trainable; the training algorithm cannot adjust these.
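The parameter counts in this summary can be verified by hand. A 3x3 depthwise convolution over C channels has 3\*3\*C weights, the 1x1 pointwise convolution that follows has C_in\*C_out, and batch normalization contributes 4 parameters per channel (gamma and beta, plus the two non-trainable moving statistics). The sketch below checks a few rows of the summary above; bias terms are omitted from the depthwise and pointwise layers because MobileNet follows each with batch normalization.

```python
def depthwise_conv_params(kernel, channels):
    # One kernel x kernel filter per input channel, no bias term.
    return kernel * kernel * channels

def pointwise_conv_params(channels_in, channels_out):
    # 1x1 convolution mixing channels, no bias term.
    return channels_in * channels_out

def batchnorm_params(channels):
    # gamma, beta (trainable) plus moving mean and variance (non-trainable).
    return 4 * channels

# Check a few rows of the summary above.
assert depthwise_conv_params(3, 512) == 4608        # conv_dw_8
assert pointwise_conv_params(512, 512) == 262144    # conv_pw_8
assert batchnorm_params(512) == 2048                # conv_pw_8_bn
assert depthwise_conv_params(3, 1024) == 9216       # conv_dw_13
# The top classifier is an ordinary 1x1 convolution and does include a bias.
assert pointwise_conv_params(1024, 1000) + 1000 == 1025000  # conv_preds
```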
When we later use transfer learning with this model, we will strip off the final layers that classify the 1,000 ImageNet categories and replace them with layers for our four media types. Only our new layers will be trainable; we will mark the existing layers as non-trainable.

This neural network makes extensive use of the ReLU activation function, which is a common choice. It also makes use of batch normalization and dropout. Many deep neural networks are pyramid-shaped, and this is the case here: the network uses an expanding pyramid shape, with the neuron/filter counts growing from 32 to 64, 128, 256, and 512, and maxing out at 1,024.

We will now use the MobileNet to classify the image URLs below. You can add additional URLs of your own to see how well the MobileNet can classify.
###Code
%matplotlib inline
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
import numpy as np
from io import BytesIO
from IPython.display import display, HTML
from tensorflow.keras.applications.mobilenet import decode_predictions

IMAGE_WIDTH = 224
IMAGE_HEIGHT = 224
IMAGE_CHANNELS = 3

ROOT = "https://data.heatonresearch.com/data/t81-558/images/"

def make_square(img):
    # Center-crop to a square; the crop box is (left, upper, right, lower).
    cols, rows = img.size
    if rows > cols:
        pad = (rows - cols) / 2
        img = img.crop((0, pad, cols, pad + cols))
    else:
        pad = (cols - rows) / 2
        img = img.crop((pad, 0, pad + rows, rows))
    return img

L = "_________________________________________________________"

def classify_array(images):
    for url in images:
        ImageFile.LOAD_TRUNCATED_IMAGES = False
        response = requests.get(url)
        img = Image.open(BytesIO(response.content))
        img.load()
        img = img.resize((IMAGE_WIDTH, IMAGE_HEIGHT), Image.ANTIALIAS)
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        x = x[:, :, :, :3]  # drop a possible alpha channel
        pred = model.predict(x)
        display(L)
        display(img)
        print(np.argmax(pred, axis=1))
        lst = decode_predictions(pred, top=5)
        for itm in lst[0]:
            print(itm)

classify_array( [
ROOT+"soccer_ball.jpg",
ROOT+"race_truck.jpg"
])
###Output
_____no_output_____
###Markdown
Overall, the neural network is doing quite well. For many applications, MobileNet might be entirely acceptable as an image classifier. However, if you need to classify very specialized images that are not among the 1,000 image types supported by ImageNet, it is necessary to use transfer learning.

Transfer Learning

It is possible to create your own image classification network from scratch. This endeavor would take considerable time and resources; just creating an image classifier would require many labeled pictures. By using a pretrained neural network, you are tapping into knowledge already built into its lower layers. The transferred layers likely already have some notion of low-level visual concepts such as edges, curves, and textures, and these concepts help the network learn to identify the new images.

Next, we reload the MobileNet; however, we set the *include_top* parameter to *False*. This setting instructs Keras not to load the final classification layers, and it is the common mode of operation for transfer learning. We display a summary to see that the top classification layers are now missing.
###Code
base_model = MobileNet(weights='imagenet', include_top=False)  # load MobileNet and discard the final 1000-neuron classification layers
base_model.summary()
###Output
WARNING:tensorflow:`input_shape` is undefined or non-square, or `rows` is not in [128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.
Model: "mobilenet_1.00_224" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, None, None, 3)] 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, None, None, 3) 0 _________________________________________________________________ conv1 (Conv2D) (None, None, None, 32) 864 _________________________________________________________________ conv1_bn (BatchNormalization (None, None, None, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, None, None, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, None, None, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, None, None, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, None, None, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, None, None, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, None, None, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, None, None, 64) 0 _________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, None, None, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, None, None, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, None, None, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, None, None, 64) 0 _________________________________________________________________ conv_pw_2 (Conv2D) (None, None, None, 128) 8192 
_________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, None, None, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, None, None, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, None, None, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, None, None, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, None, None, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, None, None, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, None, None, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, None, None, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, None, None, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, None, None, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, None, None, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, None, None, 128) 0 _________________________________________________________________ conv_pw_4 (Conv2D) (None, None, None, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, None, None, 256) 1024 _________________________________________________________________ conv_pw_4_relu (ReLU) (None, None, None, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, None, None, 256) 2304 _________________________________________________________________ conv_dw_5_bn (BatchNormaliza 
(None, None, None, 256) 1024 _________________________________________________________________ conv_dw_5_relu (ReLU) (None, None, None, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, None, None, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, None, None, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, None, None, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, None, None, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, None, None, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, None, None, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, None, None, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, None, None, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, None, None, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ 
conv_pw_7_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, None, None, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, None, None, 512) 262144 _________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_9_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, None, None, 512) 262144 
_________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_10_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, None, None, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, None, None, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, None, None, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, None, None, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, None, None, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, None, None, 1024) 524288 _________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, None, None, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, None, None, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, None, None, 1024) 9216 _________________________________________________________________ 
conv_dw_13_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_dw_13_relu (ReLU) (None, None, None, 1024) 0
_________________________________________________________________
conv_pw_13 (Conv2D) (None, None, None, 1024) 1048576
_________________________________________________________________
conv_pw_13_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_pw_13_relu (ReLU) (None, None, None, 1024) 0
=================================================================
Total params: 3,228,864
Trainable params: 3,206,976
Non-trainable params: 21,888
_________________________________________________________________
###Markdown
We will add new top layers to the neural network. The final softmax layer supports our four classes.
###Code
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
x = Dense(1024, activation='relu')(x)
preds = Dense(4, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Next, we mark the original MobileNet layers as non-trainable and our new layers as trainable.
###Code
model = Model(inputs=base_model.input, outputs=preds)

# Freeze every layer that came from the pretrained base model;
# the newly added layers remain trainable.
for layer in base_model.layers:
    layer.trainable = False
for layer in model.layers[len(base_model.layers):]:
    layer.trainable = True
###Output
_____no_output_____
###Markdown
To train the neural network, we must create a directory structure to hold the images. The Keras method **flow_from_directory** reads the training images directly from such a structure. It requires that the folder be laid out as follows: each class is a subfolder that contains the images of that class.
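As a minimal sketch (the `trans` root here is hypothetical; substitute your own path), the skeleton that **flow_from_directory** expects can be created with a few lines of standard-library Python, using the same four class names that appear later in `class_indices`:

```python
import os

root = "trans"  # hypothetical dataset root; adjust to your own location

# One subdirectory per class; flow_from_directory infers the class labels
# from these folder names. Training images go inside each subfolder.
for class_name in ["cd", "disk35", "disk525", "tape"]:
    os.makedirs(os.path.join(root, class_name), exist_ok=True)
```

After copying images into the subfolders, pointing **flow_from_directory** at the root discovers the four classes automatically.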
We can also specify a target size; in this case, the original MobileNet size of 224x224 is desired.

For this simple example, I included four classes; my directories are set up as follows:

* **trans** - The root directory of the dataset.
* **trans/cd** - Pictures of CDs.
* **trans/disk35** - Pictures of 3.5-inch disks.
* **trans/disk525** - Pictures of 5.25-inch disks.
* **trans/tape** - Pictures of tapes.
###Code
if COLAB:
    PATH = "/content/drive/My Drive/projects/trans/"
else:
    PATH = 'c:\\jth\\data\\trans'

train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator = train_datagen.flow_from_directory(PATH,
    target_size=(224,224),
    color_mode='rgb',
    batch_size=52,
    class_mode='categorical',
    shuffle=True)
###Output
Found 52 images belonging to 4 classes.
###Markdown
We are now ready to compile and fit the neural network.
###Code
model.compile(optimizer='Adam', loss='categorical_crossentropy',
    metrics=['accuracy'])

step_size_train = train_generator.n // train_generator.batch_size
model.fit(train_generator,
    steps_per_epoch=step_size_train,
    epochs=50)
###Output
Epoch 1/50
1/1 [==============================] - 0s 2ms/step - loss: 1.6215 - accuracy: 0.2885
Epoch 2/50
1/1 [==============================] - 0s 1ms/step - loss: 2.0469 - accuracy: 0.4808
Epoch 3/50
1/1 [==============================] - 0s 1ms/step - loss: 0.0366 - accuracy: 1.0000
Epoch 4/50
1/1 [==============================] - 0s 1ms/step - loss: 0.9929 - accuracy: 0.6923
Epoch 5/50
1/1 [==============================] - 0s 2ms/step - loss: 0.0829 - accuracy: 0.9808
Epoch 6/50
1/1 [==============================] - 0s 1ms/step - loss: 0.0026 - accuracy: 1.0000
Epoch 7/50
1/1 [==============================] - 0s 1ms/step - loss: 8.4217e-04 - accuracy: 1.0000
Epoch 8/50
1/1 [==============================] - 0s 1ms/step - loss: 0.0011 - accuracy: 1.0000
Epoch 9/50
1/1 [==============================] - 0s 2ms/step - loss: 0.0018 - accuracy: 1.0000
Epoch 10/50
1/1
[==============================] - 0s 2ms/step - loss: 0.0024 - accuracy: 1.0000 Epoch 11/50 1/1 [==============================] - 0s 3ms/step - loss: 0.0024 - accuracy: 1.0000 Epoch 12/50 1/1 [==============================] - 0s 2ms/step - loss: 0.0018 - accuracy: 1.0000 Epoch 13/50 1/1 [==============================] - 0s 3ms/step - loss: 0.0010 - accuracy: 1.0000 Epoch 14/50 1/1 [==============================] - 0s 1ms/step - loss: 5.5196e-04 - accuracy: 1.0000 Epoch 15/50 1/1 [==============================] - 0s 2ms/step - loss: 2.9253e-04 - accuracy: 1.0000 Epoch 16/50 1/1 [==============================] - 0s 1ms/step - loss: 1.5706e-04 - accuracy: 1.0000 Epoch 17/50 1/1 [==============================] - 0s 2ms/step - loss: 8.5439e-05 - accuracy: 1.0000 Epoch 18/50 1/1 [==============================] - 0s 2ms/step - loss: 4.7495e-05 - accuracy: 1.0000 Epoch 19/50 1/1 [==============================] - 0s 3ms/step - loss: 2.7115e-05 - accuracy: 1.0000 Epoch 20/50 1/1 [==============================] - 0s 2ms/step - loss: 1.6015e-05 - accuracy: 1.0000 Epoch 21/50 1/1 [==============================] - 0s 2ms/step - loss: 9.8780e-06 - accuracy: 1.0000 Epoch 22/50 1/1 [==============================] - 0s 1ms/step - loss: 6.5151e-06 - accuracy: 1.0000 Epoch 23/50 1/1 [==============================] - 0s 1ms/step - loss: 4.7202e-06 - accuracy: 1.0000 Epoch 24/50 1/1 [==============================] - 0s 2ms/step - loss: 3.8330e-06 - accuracy: 1.0000 Epoch 25/50 1/1 [==============================] - 0s 1ms/step - loss: 3.5028e-06 - accuracy: 1.0000 Epoch 26/50 1/1 [==============================] - 0s 2ms/step - loss: 3.5234e-06 - accuracy: 1.0000 Epoch 27/50 1/1 [==============================] - 0s 2ms/step - loss: 3.7640e-06 - accuracy: 1.0000 Epoch 28/50 1/1 [==============================] - 0s 2ms/step - loss: 4.1491e-06 - accuracy: 1.0000 Epoch 29/50 1/1 [==============================] - 0s 1ms/step - loss: 4.6075e-06 - accuracy: 1.0000 Epoch 30/50 
1/1 [==============================] - 0s 2ms/step - loss: 5.0750e-06 - accuracy: 1.0000 Epoch 31/50 1/1 [==============================] - 0s 4ms/step - loss: 5.5081e-06 - accuracy: 1.0000 Epoch 32/50 1/1 [==============================] - 0s 6ms/step - loss: 5.8794e-06 - accuracy: 1.0000 Epoch 33/50 1/1 [==============================] - 0s 2ms/step - loss: 6.1544e-06 - accuracy: 1.0000 Epoch 34/50 1/1 [==============================] - 0s 2ms/step - loss: 6.2942e-06 - accuracy: 1.0000 Epoch 35/50 1/1 [==============================] - 0s 1ms/step - loss: 6.3125e-06 - accuracy: 1.0000 Epoch 36/50 1/1 [==============================] - 0s 2ms/step - loss: 6.2140e-06 - accuracy: 1.0000 Epoch 37/50 1/1 [==============================] - 0s 1ms/step - loss: 5.9986e-06 - accuracy: 1.0000 Epoch 38/50 1/1 [==============================] - 0s 1ms/step - loss: 5.7075e-06 - accuracy: 1.0000 Epoch 39/50 1/1 [==============================] - 0s 1ms/step - loss: 5.3545e-06 - accuracy: 1.0000 Epoch 40/50 1/1 [==============================] - 0s 1ms/step - loss: 4.9878e-06 - accuracy: 1.0000 Epoch 41/50 1/1 [==============================] - 0s 2ms/step - loss: 4.6188e-06 - accuracy: 1.0000 Epoch 42/50 1/1 [==============================] - 0s 2ms/step - loss: 4.2498e-06 - accuracy: 1.0000 Epoch 43/50 1/1 [==============================] - 0s 2ms/step - loss: 3.8900e-06 - accuracy: 1.0000 Epoch 44/50 1/1 [==============================] - 0s 1ms/step - loss: 3.5439e-06 - accuracy: 1.0000 Epoch 45/50 1/1 [==============================] - 0s 1ms/step - loss: 3.2184e-06 - accuracy: 1.0000 Epoch 46/50 1/1 [==============================] - 0s 1ms/step - loss: 2.9181e-06 - accuracy: 1.0000 Epoch 47/50 1/1 [==============================] - 0s 3ms/step - loss: 2.6454e-06 - accuracy: 1.0000 Epoch 48/50 1/1 [==============================] - 0s 1ms/step - loss: 2.4001e-06 - accuracy: 1.0000 Epoch 49/50 1/1 [==============================] - 0s 2ms/step - loss: 2.1823e-06 - 
accuracy: 1.0000
Epoch 50/50
1/1 [==============================] - 0s 1ms/step - loss: 1.9898e-06 - accuracy: 1.0000
###Markdown
To make use of this neural network, we will need to know which output neuron corresponds to each of the training classes/directories we provided to the generator. Calling the **class_indices** property of the generator provides this information.
###Code
print(train_generator.class_indices)
###Output
{'cd': 0, 'disk35': 1, 'disk525': 2, 'tape': 3}
###Markdown
We are now ready to see how our new model can predict our classes. The URLs in the code are some examples; feel free to add your own. We did not use a large dataset, so the predictions will not be perfect; a larger training set would improve accuracy.
###Code
%matplotlib inline

def classify_array(images, classes):
    # Invert the class_indices mapping so we can look up a class name
    # from the index of the winning output neuron.
    inv_map = {v: k for k, v in classes.items()}
    for url in images:
        ImageFile.LOAD_TRUNCATED_IMAGES = False
        response = requests.get(url)
        img = Image.open(BytesIO(response.content))
        img.load()
        img = img.resize((IMAGE_WIDTH, IMAGE_HEIGHT), Image.ANTIALIAS)
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        x = x[:, :, :, :3]  # drop a possible alpha channel
        pred = model.predict(x)
        display(L)
        display(img)
        pred2 = int(np.argmax(pred, axis=1))
        print(pred)
        print(inv_map[pred2])
        #print(classes[pred2])
        #print(pred[0])

classify_array( [
    #ROOT+"disk_35.png",
    #ROOT+"disk_525.png",
    #ROOT+"disk_35b.png",
    #ROOT+"IMG_1563.jpg",
    #ROOT+"IMG_1565.jpg",
    ROOT+"IMG_1567.jpg",
    ROOT+"IMG_1570.jpg"
], train_generator.class_indices)
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks

**Module 9: Transfer Learning**

* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
Module 9 Material

* Part 9.1: Introduction to Keras Transfer Learning [[Video]](https://www.youtube.com/watch?v=WLlP6S-Z8Xs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_1_keras_transfer.ipynb)
* Part 9.2: Popular Pretrained Neural Networks for Keras [[Video]](https://www.youtube.com/watch?v=ctVA1_46YEE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_2_popular_transfer.ipynb)
* **Part 9.3: Transfer Learning for Computer Vision and Keras** [[Video]](https://www.youtube.com/watch?v=61vMUm_XBMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_3_transfer_cv.ipynb)
* Part 9.4: Transfer Learning for Languages and Keras [[Video]](https://www.youtube.com/watch?v=ajmAAg9FxXA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_4_transfer_nlp.ipynb)
* Part 9.5: Transfer Learning for Keras Feature Engineering [[Video]](https://www.youtube.com/watch?v=Dttxsm8zpL8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_5_transfer_feature_eng.ipynb)

Google CoLab Instructions

The following code ensures that Google CoLab is running the correct version of TensorFlow.
###Code
# Start CoLab
try:
    %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False
###Output
Note: not using Google CoLab
###Markdown
Part 9.3: Transfer Learning for Computer Vision and Keras

In this part, we will make use of transfer learning to create a simple neural network that can recognize several types of computer storage media. To keep the example simple, we will train on only a handful of classes. A much more advanced application of the same technique can be found at the [Microsoft Dog Breed Image Search](https://www.bing.com/visualsearch/Microsoft/WhatDog).

To keep computation times to a minimum, we will make use of the MobileNet, which is built into Keras. We will begin by loading the entire MobileNet and seeing how well it classifies several test images.
MobileNet can classify 1,000 different image types. We will ultimately extend it to classify image types that are not in its dataset: in this example, four types of computer storage media. However, we begin by classifying image types among those that it was trained on. Even though our test images were not in its training set, the loaded neural network should be able to classify them.
###Code
import pandas as pd
import numpy as np
import os
import tensorflow.keras
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
###Output
_____no_output_____
###Markdown
We begin by downloading weights for a MobileNet trained on the ImageNet dataset. The download will take some time the first time you use the network.
###Code
model = MobileNet(weights='imagenet', include_top=True)
###Output
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.6/mobilenet_1_0_224_tf.h5
17227776/17225924 [==============================] - 1s 0us/step
###Markdown
The loaded network is a Keras neural network, just like those that we've been working with so far. However, this network was trained/engineered on advanced hardware, and simply looking at the structure of a state-of-the-art neural network can be educational.
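The spatial sizes in the summary follow the usual convolution arithmetic: with input size n, kernel size k, and stride s (after any zero padding), the output size is floor((n - k) / s) + 1. A small sketch checking MobileNet's stride-2 downsampling steps:

```python
def conv_output_size(n, k, s):
    # Spatial output size of a convolution applied to an n x n input
    # with a k x k kernel and stride s (padding already applied).
    return (n - k) // s + 1

# conv1_pad grows the 224x224 input to 225x225, then conv1 (3x3, stride 2)
# produces 112x112.
assert conv_output_size(225, 3, 2) == 112
# conv_pad_2 / conv_dw_2: 112 -> 113 -> 56
assert conv_output_size(113, 3, 2) == 56
# conv_pad_4 / conv_dw_4: 56 -> 57 -> 28
assert conv_output_size(57, 3, 2) == 28
```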
###Code model.summary() ###Output Model: "mobilenet_1.00_224" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 225, 225, 3) 0 _________________________________________________________________ conv1 (Conv2D) (None, 112, 112, 32) 864 _________________________________________________________________ conv1_bn (BatchNormalization (None, 112, 112, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, 112, 112, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, 112, 112, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, 112, 112, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, 112, 112, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, 112, 112, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, 112, 112, 64) 0 _________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, 113, 113, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, 56, 56, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, 56, 56, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, 56, 56, 64) 0 _________________________________________________________________ conv_pw_2 (Conv2D) (None, 56, 56, 128) 8192 
_________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, 56, 56, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, 56, 56, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, 56, 56, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, 56, 56, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, 57, 57, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, 28, 28, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, 28, 28, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, 28, 28, 128) 0 _________________________________________________________________ conv_pw_4 (Conv2D) (None, 28, 28, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_4_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, 28, 28, 256) 2304 _________________________________________________________________ conv_dw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 
_________________________________________________________________ conv_dw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, 28, 28, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, 28, 28, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, 28, 28, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, 29, 29, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, 14, 14, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, 14, 14, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, 14, 14, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, 14, 14, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_7_relu (ReLU) (None, 14, 14, 512) 0 
_________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_9_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, 14, 14, 512) 2048 
_________________________________________________________________ conv_pw_10_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, 14, 14, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, 14, 14, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, 14, 14, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, 14, 14, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, 15, 15, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, 7, 7, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, 7, 7, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, 7, 7, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, 7, 7, 1024) 524288 _________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, 7, 7, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, 7, 7, 1024) 9216 _________________________________________________________________ conv_dw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 _________________________________________________________________ conv_dw_13_relu (ReLU) (None, 7, 7, 1024) 0 
_________________________________________________________________ conv_pw_13 (Conv2D) (None, 7, 7, 1024) 1048576 
_________________________________________________________________ conv_pw_13_bn (BatchNormaliz (None, 7, 7, 1024) 4096 
_________________________________________________________________ conv_pw_13_relu (ReLU) (None, 7, 7, 1024) 0 
_________________________________________________________________ global_average_pooling2d (Gl (None, 1024) 0 
_________________________________________________________________ reshape_1 (Reshape) (None, 1, 1, 1024) 0 
_________________________________________________________________ dropout (Dropout) (None, 1, 1, 1024) 0 
_________________________________________________________________ conv_preds (Conv2D) (None, 1, 1, 1000) 1025000 
_________________________________________________________________ reshape_2 (Reshape) (None, 1000) 0 
_________________________________________________________________ act_softmax (Activation) (None, 1000) 0 
================================================================= Total params: 4,253,864 Trainable params: 4,231,976 Non-trainable params: 21,888 _________________________________________________________________ ###Markdown Just examining the above structure, several clues to neural network architecture become evident. Notice how some of the layers have zeros in their number of parameters. Items which are hyperparameters are always zero; nothing about those layers is learned. The other layers have learnable parameters that are adjusted as training occurs. The layer types are all hyperparameters: Keras will not change a convolution layer to a max pooling layer for you. However, the layers that have parameters are trained/adjusted by the training algorithm. Most of the parameters seen above are the weights of the neural network. Some of the parameters are marked as non-trainable. These cannot be adjusted by the training algorithm. 
When we later use transfer learning with this model we will strip off the final layers that classify 1000 items and replace them with our 3 dog breed classification layer. Only our new layers will be trainable; we will mark the existing layers as non-trainable. The ReLU activation function is used throughout the neural network. Batch normalization and dropout are also used. We cannot see the percentage used for dropout; that might be specified in the original paper. Many deep neural networks are pyramid shaped, and this is the case for this one. This neural network uses an expanding pyramid shape, as you can see the neuron/filter counts expand from 32 to 64 to 128 to 256 to 512 and max out at 1,024. We will now use the MobileNet to classify several image URLs below. You can add additional URLs of your own to see how well the MobileNet can classify. ###Code %matplotlib inline from PIL import Image, ImageFile from matplotlib.pyplot import imshow import requests import numpy as np from io import BytesIO from IPython.display import display, HTML from tensorflow.keras.applications.mobilenet import decode_predictions IMAGE_WIDTH = 224 IMAGE_HEIGHT = 224 IMAGE_CHANNELS = 3 images = [ "https://cdn.shopify.com/s/files/1/0712/4751/products/SMA-01_2000x.jpg?v=1537468751", "https://farm2.static.flickr.com/1394/967537586_87b1358ad3.jpg", "https://sites.wustl.edu/jeffheaton/files/2016/07/jheaton_wustl1-262izm5-458x458.jpg", "https://1.bp.blogspot.com/-0vGbvWUrSAA/XP-OurPTA4I/AAAAAAAAgtg/TGx6YiGBEGIMjnViDjvVnYzYp__DJ6I-gCLcBGAs/s320/B%252Bt%2525aMbJQkm3Z50rqput%252BA.jpg" ] def make_square(img): cols,rows = img.size if rows>cols: pad = (rows-cols)/2 img = img.crop((pad,0,cols,cols)) else: pad = (cols-rows)/2 img = img.crop((0,pad,rows,rows)) return img for url in images: x = [] ImageFile.LOAD_TRUNCATED_IMAGES = False response = requests.get(url) img = Image.open(BytesIO(response.content)) img.load() img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) pred = model.predict(x) display("___________________________________________________________________________________________") display(img) print(np.argmax(pred,axis=1)) lst = decode_predictions(pred, top=5) for itm in lst[0]: print(itm) ###Output _____no_output_____ ###Markdown Overall, the neural network is doing quite well. However, it does not classify me as a "person"; rather, I am classified as a "suit". Similarly, my English Bulldog Hickory is classified as a "pug". This is likely because I am only providing a closeup of his face. For many applications, MobileNet might be entirely acceptable as an image classifier. However, if you need to classify very specialized images that are not in the 1,000 image types supported by ImageNet, it is necessary to use transfer learning. 
Transfer
It is possible to create your own image classification network from scratch. This would take considerable time and resources. Just creating a dog breed classifier would require many pictures of dogs, labeled by breed. By using a pretrained neural network, you are tapping into knowledge already built into the lower layers of the neural network. The transferred layers likely already have some notion of eyes, ears, feet, and fur. These lower level concepts help to train the neural network to identify dog breeds. Next we reload the MobileNet; however, this time we set the *include_top* parameter to *False*. This instructs Keras to not load the final classification layers. This is the common mode of operation for transfer learning. We display a summary to see that the top classification layer is now missing. ###Code base_model=MobileNet(weights='imagenet',include_top=False) #imports the mobilenet model and discards the last 1000 neuron layer. 
base_model.summary() ###Output C:\Users\jheaton\Miniconda3\envs\tensorflow\lib\site-packages\keras_applications\mobilenet.py:207: UserWarning: `input_shape` is undefined or non-square, or `rows` is not in [128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default. warnings.warn('`input_shape` is undefined or non-square, ' ###Markdown We will add new top layers to the neural network. Our final SoftMax layer includes support for 3 classes. ###Code x=base_model.output x=GlobalAveragePooling2D()(x) x=Dense(1024,activation='relu')(x) x=Dense(1024,activation='relu')(x) preds=Dense(3,activation='softmax')(x) ###Output _____no_output_____ ###Markdown Next, we mark the original MobileNet layers as non-trainable and our new layers as trainable. ###Code model=Model(inputs=base_model.input,outputs=preds) for layer in model.layers[:20]: layer.trainable=False for layer in model.layers[20:]: layer.trainable=True ###Output _____no_output_____ ###Markdown To train the neural network we must create a directory structure to hold the images. The Keras command **flow_from_directory** reads the images from this structure for us. It requires that a folder be laid out so that each class is a folder that contains images of that class. We can also specify a target size; in this case the original MobileNet size of 224x224 is desired. ###Code if COLAB: PATH = "" else: PATH = "./data/transfer" train_datagen=ImageDataGenerator(preprocessing_function=preprocess_input) train_generator=train_datagen.flow_from_directory(PATH, # the root folder defined above target_size=(224,224), color_mode='rgb', batch_size=1, class_mode='categorical', shuffle=True) ###Output _____no_output_____ ###Markdown We are now ready to compile and fit the neural network. Notice we are using **fit_generator** rather than **fit**. This is because we are using the convenient **ImageDataGenerator**. 
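The directory-layout illustration from the original page is not reproduced here, so the structure **flow_from_directory** expects can be sketched in code. The root path and breed folder names below are assumptions for illustration, not the course's actual data:

```python
import os
import tempfile

# Hypothetical layout: one subfolder per class under a single root.
# flow_from_directory infers the class labels from these folder names.
root = os.path.join(tempfile.mkdtemp(), "trans")
breeds = ["bulldog", "german_shepherd", "poodle"]  # assumed 3 classes
for breed in breeds:
    os.makedirs(os.path.join(root, breed), exist_ok=True)

print(sorted(os.listdir(root)))  # each entry becomes one class
```

Pointing `flow_from_directory` at `root` would then yield batches labeled by these three folder names.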
###Code model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy']) step_size_train=train_generator.n//train_generator.batch_size model.fit_generator(generator=train_generator, steps_per_epoch=step_size_train, epochs=50) ###Output _____no_output_____ ###Markdown We are now ready to see how our new model can predict dog breeds. The URLs in the code below provide several example dogs to look at. Feel free to add your own. ###Code %matplotlib inline from PIL import Image, ImageFile from matplotlib.pyplot import imshow import requests import numpy as np from io import BytesIO from IPython.display import display, HTML from tensorflow.keras.applications.mobilenet import decode_predictions IMAGE_WIDTH = 224 IMAGE_HEIGHT = 224 IMAGE_CHANNELS = 3 images = [ "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a8/02.Owczarek_niemiecki_u%C5%BCytkowy_kr%C3%B3tkow%C5%82osy_suka.jpg/2560px-02.Owczarek_niemiecki_u%C5%BCytkowy_kr%C3%B3tkow%C5%82osy_suka.jpg", "https://upload.wikimedia.org/wikipedia/commons/5/51/DSHwiki.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e5/Axel%2C_the_English_Bulldog.jpg/440px-Axel%2C_the_English_Bulldog.jpg", "https://1.bp.blogspot.com/-0vGbvWUrSAA/XP-OurPTA4I/AAAAAAAAgtg/TGx6YiGBEGIMjnViDjvVnYzYp__DJ6I-gCLcBGAs/s320/B%252Bt%2525aMbJQkm3Z50rqput%252BA.jpg", "https://thehappypuppysite.com/wp-content/uploads/2017/12/poodle1.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/4/40/Pudel_Grossschwarz.jpg/440px-Pudel_Grossschwarz.jpg" ] def make_square(img): cols,rows = img.size if rows>cols: pad = (rows-cols)/2 img = img.crop((pad,0,cols,cols)) else: pad = (cols-rows)/2 img = img.crop((0,pad,rows,rows)) return img for url in images: x = [] ImageFile.LOAD_TRUNCATED_IMAGES = False response = requests.get(url) img = Image.open(BytesIO(response.content)) img.load() img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) 
pred = model.predict(x) display("___________________________________________________________________________________________") display(img) print(np.argmax(pred,axis=1)) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 9: Regularization: L1, L2 and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 9 Material* Part 9.1: Introduction to Keras Transfer Learning [[Video]](https://www.youtube.com/watch?v=xyymDGReKdY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=26) [[Notebook]](t81_558_class_09_1_keras_transfer.ipynb)* Part 9.2: Popular Pretrained Neural Networks for Keras [[Video]](https://www.youtube.com/watch?v=CEFcwpBneFo&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=27) [[Notebook]](t81_558_class_09_2_popular_transfer.ipynb)* **Part 9.3: Transfer Learning for Computer Vision and Keras** [[Video]](https://www.youtube.com/watch?v=JPqwyuK7bPg&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=28) [[Notebook]](t81_558_class_09_3_transfer_cv.ipynb)* Part 9.4: Transfer Learning for Languages and Keras [[Video]](https://www.youtube.com/watch?v=JPqwyuK7bPg&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=28) [[Notebook]](t81_558_class_09_4_transfer_nlp.ipynb)* Part 9.5: Transfer Learning for Keras Feature Engineering [[Video]](https://www.youtube.com/watch?v=JPqwyuK7bPg&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=28) [[Notebook]](t81_558_class_09_5_transfer_feature_eng.ipynb) Part 9.3: Transfer Learning for Computer Vision and Keras
In this part we will make use of transfer learning to create a simple neural network that can recognize dog breeds. To keep the example simple, we will only train for a handful of breeds. 
A much more advanced form of this model can be found at the [Microsoft Dog Breed Image Search](https://www.bing.com/visualsearch/Microsoft/WhatDog). To keep computation times to a minimum, we will make use of the MobileNet, which is built into Keras. We will begin by loading the entire MobileNet and seeing how well it classifies with several test images. MobileNet can classify 1,000 different image types. We will ultimately extend it to classify image types that are not in its dataset, in this example 3 dog breeds. However, we begin by classifying image types among those that it was trained on. Even though our test images were not in its training set, the loaded neural network should be able to classify them. ###Code import pandas as pd import numpy as np import os import tensorflow.keras import matplotlib.pyplot as plt from tensorflow.keras.layers import Dense,GlobalAveragePooling2D from tensorflow.keras.applications import MobileNet from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.mobilenet import preprocess_input from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam ###Output _____no_output_____ ###Markdown We begin by downloading weights for a MobileNet trained for the imagenet dataset. This will take some time to download the first time you train the network. ###Code model = MobileNet(weights='imagenet',include_top=True) ###Output _____no_output_____ ###Markdown The loaded network is a Keras neural network, just like those that we've been working with so far. However, this is a neural network that was trained/engineered on advanced hardware. Simply looking at the structure of an advanced state-of-the-art neural network can be educational. 
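The parameter counts in the `model.summary()` output shown earlier can be reproduced by hand, which is a useful sanity check when reading an architecture like MobileNet. A small standalone computation for the first few layers, using the shapes and counts taken from that summary:

```python
# Parameter arithmetic for the first MobileNet layers
# (expected counts are read off the model.summary() output above).

def conv2d_params(k, c_in, c_out, bias=False):
    # Standard convolution: one k*k*c_in kernel per output filter.
    return k * k * c_in * c_out + (c_out if bias else 0)

def depthwise_params(k, c):
    # Depthwise convolution: one k*k filter per input channel.
    return k * k * c

def batchnorm_params(c):
    # gamma, beta (trainable) + moving mean, moving variance (non-trainable).
    return 4 * c

assert conv2d_params(3, 3, 32) == 864      # conv1
assert batchnorm_params(32) == 128         # conv1_bn
assert depthwise_params(3, 32) == 288      # conv_dw_1
assert conv2d_params(1, 32, 64) == 2048    # conv_pw_1 (pointwise)
```

The batch-normalization count also explains the non-trainable parameters in the summary: half of each BatchNorm layer's parameters (the moving statistics) are never adjusted by gradient descent.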
nbs/13_learner.ipynb
###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) tfms = Cuda() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0) return DataBunch(train_dl, valid_dl) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run = 'learn',None,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" if self.run: getattr(self, event_name, noop)() @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). 
A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. 
###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that it only works to get the value of the attribute; if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output _____no_output_____ ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" def begin_fit(self): "Set the iter and epoch counters to 0, put the model on the right device" self.learn.train_iter,self.learn.pct_train = 0,0. 
self.model.to(self.dbunch.device) def after_batch(self): "Update the iter counter (in training mode)" if not self.training: return self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization. ###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. 
class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None): store_attr(self, "with_input,with_loss,save_preds,save_targs") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow Sometimes we want to skip some of the steps of the training loop: in gradient accumulation, for instance, we don't always want to do the step/zeroing of the grads.
During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelFitException="Interrupt training and go to `after_fit`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelBatchException="Skip the rest of this batch and go to `after_batch`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions has occurred and add code that executes right after with the following events:

- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*. ###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = slice(3e-3) defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False 
state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export class Learner(): def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True): store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dbunch.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dbunch, 'path', Path('.')) self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs)) self.add_cbs(cbs) self.model.to(self.dbunch.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): 
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dbunch.train_dl; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dbunch.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dbunch.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, act=None, **kwargs): cb = GatherPredsCallback(with_input=with_input, **kwargs) with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: 
res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=0): dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dbunch, 'n_inp', -1) full_dec = self.dbunch.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs): if dl is None: dl = self.dbunch.dls[ds_idx] b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dbunch.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dbunch.device if self.opt is None: self.create_opt() distrib_barrier() file = join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group 
together a `model`, some `dbunch` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups).
The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (in snake_case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`. `metrics` is an optional list of metrics that can be either functions or `Metric`s (see below). Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cb_funcs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cb_funcs=TstCallback, lr=1e-2) xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after=TrainEvalCallback def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self):
test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): if self.training: test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not self.training and not self.model.training learn = synth_learner(cb_funcs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True) learn.fit(1) #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. 
In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. ###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dbunch.one_batch() learn =
synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dbunch.train_dl learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dbunch.valid_dl learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. 
Use `device` to load the model/optimizer state on a device different from the one it was saved on. ###Code learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:

- `model`: the model used for training/validation
- `data`: the underlying `DataBunch`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current
`DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:

- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0.
to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:

- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be +=
['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, 
CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output 
_____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. 
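As a concrete illustration of this interface, here is a minimal, framework-free sketch of a metric that cannot be computed as a plain average of per-batch values (a running RMSE). The `FakeLearner` class is a hypothetical stand-in that only mimics the `pred`/`yb` attributes `accumulate` reads from `learn`; it is not part of the library.

```python
import math

class FakeLearner:
    # Hypothetical stand-in: exposes only the attributes `accumulate` reads.
    def __init__(self, pred, yb): self.pred, self.yb = pred, yb

class RMSEMetric:
    "Running root-mean-squared error, following the reset/accumulate/value protocol"
    def reset(self): self.sq_err, self.count = 0.0, 0
    def accumulate(self, learn):
        # Accumulate the *sum* of squared errors across batches:
        # averaging per-batch RMSEs would give the wrong answer for uneven batches.
        self.sq_err += sum((p - t) ** 2 for p, t in zip(learn.pred, learn.yb))
        self.count += len(learn.pred)
    @property
    def value(self): return math.sqrt(self.sq_err / self.count) if self.count else None
    @property
    def name(self): return "rmse"

m = RMSEMetric()
m.reset()
for pred, targ in [([1.0, 2.0], [1.0, 4.0]), ([3.0], [0.0])]:  # two uneven batches
    m.accumulate(FakeLearner(pred, targ))
print(m.value)  # sqrt((0 + 4 + 9) / 3)
```

The real `Metric` subclasses work the same way, except that `learn.pred` and `learn.yb` hold tensors (and `yb` is a tuple of targets).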
###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With 
varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits)-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, beta=0.98): self.add_time,self.train_metrics = add_time,train_metrics self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self._valid_mets.attrgot('name') if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}') else: names = L('train_loss', 'valid_loss') + names[1:] if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and record lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training
else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return L(self.loss) + self.metrics def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate
= "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `training_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`). ###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. 
self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, 
metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,15.579856872558594,13.582958221435547,13.582958221435547,00:00] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dbunch.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dbunch.train_dl) test_eq(res[0], res[1]) x,y = learn.dbunch.train_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. 
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dbunch.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance, if the loss is a case of cross-entropy, a softmax will be applied; if the loss is binary cross-entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dbunch.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds works with a ds not evenly divisible by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs,
reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dbunch.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dbunch.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = _TupleModel(learn.model) learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def
activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dbunch = DataBunch(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(tensor([2.])), [full_dec, dec, out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze the entire model") #hide class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,4.590181350708008,3.600499153137207,00:00] (#4) [0,3.533226251602173,2.800215244293213,00:00] (#4) [0,2.748011350631714,2.179427146911621,00:00] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dbunch self.dbunch = self.dbunch.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dbunch = old_dbunch ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def 
tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dbunch.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch # aug_preds.append(self.get_preds(dl=dl)[0][None]) aug_preds.append(self.get_preds(ds_idx)[0][None]) aug_preds = torch.cat(aug_preds).mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_test.ipynb. Converted 01_core.foundation.ipynb. Converted 01a_core.utils.ipynb. Converted 01b_core.dispatch.ipynb. Converted 01c_core.transform.ipynb. Converted 02_core.script.ipynb. Converted 03_torch_core.ipynb. Converted 03a_layers.ipynb. Converted 04_data.load.ipynb. Converted 05_data.core.ipynb. Converted 06_data.transforms.ipynb. Converted 07_data.block.ipynb. Converted 08_vision.core.ipynb. Converted 09_vision.augment.ipynb. Converted 09a_vision.data.ipynb. Converted 09b_vision.utils.ipynb. Converted 10_tutorial.pets.ipynb. 
Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.model.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 96_data.external.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. 
###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) tfms = Cuda() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0) return DataBunch(train_dl, valid_dl) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', False))) if self.run and _run: getattr(self, event_name, noop)() @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the 
model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. 
It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that it only works to get the value of the attribute; if you want to change it, you have to manually access it with `self.learn.bla`.
In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output _____no_output_____ ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model on the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dbunch.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization.
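The `pct_train` bookkeeping above can be checked with a small standalone sketch (plain Python, no fastai; the loop structure is an assumption that mirrors what `begin_train` and `after_batch` do during `fit`):

```python
n_epoch, n_iter = 4, 10  # hypothetical epoch and batches-per-epoch counts
pct_train = 0.0
for epoch in range(n_epoch):
    pct_train = epoch / n_epoch               # what begin_train sets
    for _ in range(n_iter):
        pct_train += 1.0 / (n_iter * n_epoch)  # what after_batch adds
print(round(pct_train, 6))  # reaches 1.0 at the end of training
```

This is why schedulers can rely on `pct_train` as a monotonic 0-to-1 progress indicator over the whole training run.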
###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if 
self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelFitException="Interrupt training and go to `after_fit`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelBatchException="Skip the rest of this batch and go to `after_batch`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- 
`after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*. 
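To make the jump table above concrete, here is a stripped-down sketch (plain Python, not the fastai `Learner`; the `fit` and `on_batch` names are made up for illustration) of how a loop can catch these exceptions at the right level and route to the matching `after_cancel_*` event:

```python
class CancelBatchException(Exception): pass
class CancelFitException(Exception): pass

def fit(n_epoch, n_iter, on_batch, log):
    # The loop catches each exception at the level it controls, fires the
    # matching `after_cancel_*` event, then continues (or stops) from there.
    try:
        for epoch in range(n_epoch):
            for i in range(n_iter):
                try: on_batch(epoch, i)
                except CancelBatchException: log.append('after_cancel_batch')
                finally: log.append('after_batch')
    except CancelFitException: log.append('after_cancel_fit')
    finally: log.append('after_fit')

log = []
def on_batch(epoch, i):
    if i == 1: raise CancelBatchException()    # skips the rest of this batch only
    if epoch == 1: raise CancelFitException()  # interrupts training entirely

fit(n_epoch=3, n_iter=2, on_batch=on_batch, log=log)
print(log)
```

Note how the `finally` clauses guarantee `after_batch`/`after_fit` still run, which is exactly what makes the real events reliable places to clean up state.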
###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") 
elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack # export class Learner(): def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True): store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dbunch.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dbunch, 'path', Path('.')) self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs)) self.add_cbs(cbs) self.model.to(self.dbunch.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): 
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dbunch.train_dl; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dbunch.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dbunch.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, **kwargs): cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: stack.enter_context(mgr) self(_before_epoch) 
self._do_epoch_validate(ds_idx, dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None): dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dbunch, 'n_inp', -1) full_dec = self.dbunch.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs): if dl is None: dl = self.dbunch.dls[ds_idx] b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dbunch.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dbunch.device if self.opt is None: self.create_opt() distrib_barrier() file = 
join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`. 
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (under the snake-cased version of its class name). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`. `metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below). 
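The registration rule derives the learner attribute from the callback's class name; a minimal sketch of the likely naming rule (an assumption based on the behavior checked in the tests, e.g. `TrainEvalCallback` is reachable as `learn.train_eval` and `TstCallback` as `learn.tst`, not fastai's exact implementation):

```python
import re

def camel2snake(name):
    # 'TrainEval' -> 'train_eval', 'MyFancy' -> 'my_fancy'
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()

class Callback:
    @property
    def name(self):
        # Strip the 'Callback' suffix, then snake-case what remains.
        return camel2snake(type(self).__name__.replace('Callback', '') or 'callback')

class TrainEvalCallback(Callback): pass
class MyFancyCallback(Callback): pass
```

With such a rule, `add_cb` can simply do `setattr(self, cb.name, cb)` and the callback becomes tab-completable on the learner.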
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cb_funcs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cb_funcs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not 
self.training and not self.model.training learn = synth_learner(cb_funcs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True) learn.fit(1) learn.dbunch.device #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. 
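The order of operations in that step can be sketched without any of the surrounding machinery (plain Python with a hand-derived gradient; `TinyModel` and this standalone `one_batch` are illustrative stand-ins, not the real method, which also fires the callback events at each stage):

```python
class TinyModel:
    def __init__(self): self.a, self.grad = 0.0, 0.0
    def __call__(self, x): return self.a * x

def one_batch(model, x, y, lr=0.1, training=True):
    pred = model(x)                  # 'after_pred': predictions computed
    loss = (pred - y) ** 2           # 'after_loss': loss computed
    if not training: return loss     # validation mode stops here
    model.grad = 2 * (pred - y) * x  # 'after_backward': gradient of the loss
    model.a -= lr * model.grad       # 'after_step': parameter update
    model.grad = 0.0                 # gradients zeroed for the next batch
    return loss

m = TinyModel()
losses = [one_batch(m, x=1.0, y=2.0) for _ in range(20)]
assert losses[-1] < losses[0]        # the loss shrinks as `a` approaches 2
```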
###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dbunch.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() 
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dbunch.train_dl learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dbunch.valid_dl learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on. 
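The on-disk layout used by `save_model`/`load_model` above (a dict with `model` and `opt` keys when the optimizer state is included, the raw model state otherwise) can be sketched with plain pickling; here ordinary dicts stand in for real model/optimizer state dicts:

```python
import io, pickle

def save_state(buf, model_state, opt_state=None, with_opt=True):
    # Nest under 'model'/'opt' keys only when the optimizer state is
    # present and requested, mirroring save_model above.
    if opt_state is None: with_opt = False
    state = {'model': model_state, 'opt': opt_state} if with_opt else model_state
    pickle.dump(state, buf)

def load_state(buf):
    state = pickle.load(buf)
    # Same detection rule as load_model: a saved optimizer implies
    # exactly the keys {'model', 'opt'}.
    hasopt = isinstance(state, dict) and set(state) == {'model', 'opt'}
    model_state = state['model'] if hasopt else state
    opt_state = state['opt'] if hasopt else None
    return model_state, opt_state

buf = io.BytesIO()
save_state(buf, {'a': 1.0}, {'lr': 0.01})
buf.seek(0)
m, o = load_state(buf)
assert m == {'a': 1.0} and o == {'lr': 0.01}
```

This layering is why `load` can transparently handle checkpoints saved either with or without their optimizer state.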
###Code learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified 
by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, 
exception=CancelBatchException, train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not 
in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] 
+ (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
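The bias correction in `AvgSmoothLoss` (`self.val/(1-self.beta**self.count)`) can be checked with plain floats; a minimal sketch of the same exponentially weighted average, independent of the fastai classes:

```python
# Minimal sketch of the smoothing in AvgSmoothLoss: an exponentially weighted
# moving average of the losses, divided by (1 - beta**count) to undo the bias
# toward the zero initialization (so the first smoothed value is the first loss).
def smooth_losses(losses, beta=0.98):
    val, out = 0.0, []
    for count, loss in enumerate(losses, start=1):
        val = beta * val + (1 - beta) * loss  # same update as torch.lerp(loss, val, beta)
        out.append(val / (1 - beta**count))   # bias-corrected value
    return out

smoothed = smooth_losses([1.0, 2.0, 3.0], beta=0.5)
```

With `beta=0.5` the smoothed values trail the rising losses: `[1.0, 1.666..., 2.428...]`.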
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,10.865579605102539,10.633462905883789,10.633462905883789,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dbunch.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dbunch.train_dl) test_eq(res[0], res[1]) x,y = 
learn.dbunch.train_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dbunch.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
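A simplified sketch of this kind of dispatch (hypothetical helper names; the real fastai logic reads an `activation` attribute defined on the loss function itself rather than matching on its name):

```python
import math

# Hypothetical, simplified version of the activation dispatch described above:
# binary cross-entropy with logits gets a sigmoid, cross-entropy-style losses
# get a softmax, and anything else passes the predictions through unchanged.
def sigmoid(x): return 1 / (1 + math.exp(-x))

def softmax(xs):
    m = max(xs)                        # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def pick_activation(loss_name):
    if loss_name.startswith('BCEWithLogits'): return sigmoid
    if 'CrossEntropy' in loss_name:           return softmax
    return lambda x: x                 # no known activation: identity

act = pick_activation('BCEWithLogitsLossFlat')
prob = act(0.0)  # sigmoid(0) == 0.5
```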
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dbunch.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds works with ds not evenly divisible by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dbunch.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dbunch.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model =
_TupleModel(learn.model) learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dbunch = DataBunch(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(tensor([2.])), [full_dec, dec, out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze
the entire model") #hide class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,21.60039710998535,23.18270492553711,'00:00'] (#4) [0,17.718852996826172,19.021663665771484,'00:00'] (#4) [0,14.590808868408203,15.608027458190918,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dbunch self.dbunch = self.dbunch.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dbunch = old_dbunch ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dbunch.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch # aug_preds.append(self.get_preds(dl=dl)[0][None]) aug_preds.append(self.get_preds(ds_idx)[0][None]) aug_preds = torch.cat(aug_preds).mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ 
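The blend at the end of `tta` uses `torch.lerp(aug_preds, preds, beta)`, which is plain linear interpolation; a numeric sketch with floats (values made up for illustration):

```python
# torch.lerp(start, end, weight) returns start + weight * (end - start),
# i.e. (1 - weight) * start + weight * end. With the default beta=0.25 the
# result is 75% augmented average and 25% plain predictions.
def lerp(start, end, weight):
    return start + weight * (end - start)

aug_avg, plain = 0.8, 0.4             # made-up prediction values
blended = lerp(aug_avg, plain, 0.25)  # 0.8 + 0.25 * (0.4 - 0.8) = 0.7
```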
###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.learner.ipynb. Converted 43_tabular.model.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. 
Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. Converted migrating.ipynb. ###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataLoaders(train_dl, valid_dl, device=device) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', False))) if self.run and _run: getattr(self, event_name, noop)() @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists in a minimal 
set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. 
It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. 
In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output _____no_output_____ ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model on the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dls.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization.
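The dispatch in `Callback.__call__` boils down to `getattr(self, event_name, noop)()`; a minimal standalone sketch of the same pattern (the `MiniCallback` class is hypothetical, not part of fastai):

```python
def noop(): pass

# Minimal sketch of the Callback dispatch: look the event name up on the
# instance and call the method if it is defined, otherwise fall back to a no-op.
class MiniCallback:
    def __init__(self): self.seen = []
    def __call__(self, event_name): getattr(self, event_name, noop)()
    def begin_fit(self):   self.seen.append('begin_fit')
    def after_batch(self): self.seen.append('after_batch')

cb = MiniCallback()
for ev in ('begin_fit', 'begin_epoch', 'after_batch'):  # begin_epoch is undefined
    cb(ev)
# cb.seen == ['begin_fit', 'after_batch']
```

Undefined events are silently ignored, which is why a real `Callback` only needs to implement the handful of events it cares about.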
###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if 
self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelFitException="Interrupts training and go to `after_fit`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelBatchException="Skip the rest of this batch and go to `after_batch`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`-
`after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
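The exception-driven control flow can be sketched without any fastai machinery: the batch body raises a dedicated exception, the loop catches it, records the `after_cancel_*` event, and the `finally` clause still runs `after_batch` (the event names mirror the list above; the exception class here is a local stand-in for the fastai one):

```python
# Local stand-in for the fastai exception class of the same name.
class CancelBatchException(Exception): pass

log = []

def one_batch(i):
    log.append(f'begin_batch {i}')
    if i == 1: raise CancelBatchException()  # e.g. a gradient-accumulation skip
    log.append(f'after_step {i}')

for i in range(3):
    try: one_batch(i)
    except CancelBatchException: log.append(f'after_cancel_batch {i}')
    finally: log.append(f'after_batch {i}')  # runs whether cancelled or not
```

Batch 1 skips `after_step` but still reaches `after_cancel_batch` and `after_batch`, exactly the behavior `Learner.one_batch` implements below.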
###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") 
elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack # export class Learner(): def __init__(self, dls, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, cb_funcs=None, metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True, moms=(0.95,0.85,0.95)): store_attr(self, "dls,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dls.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dls, 'path', Path('.')) self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs)) self.add_cbs(cbs) self.model.to(self.dls.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.) 
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dls, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): 
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dls.train; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, **kwargs): cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: stack.enter_context(mgr) self(_before_epoch) 
self._do_epoch_validate(ds_idx, dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None): dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dls, 'n_inp', -1) full_dec = self.dls.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=1, dl=None, max_n=9, shuffle=True, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffle=shuffle) b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dls.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dls.device if self.opt is None: self.create_opt() distrib_barrier() file = 
join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dls` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`. 
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as the learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`.

`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below). 
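As a standalone sketch in plain PyTorch (the function name and grouping below are illustrative, not the fastai API), a splitter-style function that separates a model's body from its head into two parameter groups could look like:

```python
import torch.nn as nn

def example_splitter(model):
    # Hypothetical splitter: every parameter but the last layer's in one group,
    # the final layer's weight and bias in another, so each group could later
    # be given its own learning rate.
    params = list(model.parameters())
    return [params[:-2], params[-2:]]  # [body params, head weight + bias]

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
groups = example_splitter(model)
print([len(g) for g in groups])  # → [2, 2]
```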
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cb_funcs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cb_funcs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not 
self.training and not self.model.training learn = synth_learner(cb_funcs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True) learn.fit(1) learn.dls.device #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. 
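The training-mode sequence can be sketched in plain PyTorch (standalone, not using `Learner`); the comments mark where the corresponding callback events would fire:

```python
import torch
import torch.nn as nn

# Standalone sketch of one training step, in the same order as `one_batch`:
# forward -> loss -> backward -> step -> zero the gradients.
model = nn.Linear(3, 1)
loss_func = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

xb, yb = torch.randn(8, 3), torch.randn(8, 1)
pred = model(xb)            # 'after_pred' fires here
loss = loss_func(pred, yb)  # 'after_loss'
loss.backward()             # 'after_backward'
opt.step()                  # 'after_step'
opt.zero_grad()             # gradients cleared before 'after_batch'
```

In validation mode the sequence stops after the loss computation, and no gradients are taken.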
###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dls.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() batch_events = 
['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dls.train learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dls.valid learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on. 
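Because `file` can be a buffer, model and optimizer state can round-trip through memory without touching disk. A standalone sketch in plain `torch` (not the `Learner.save`/`load` API, but the same `torch.save`/`torch.load` machinery underneath):

```python
import io
import torch
import torch.nn as nn

# Save a model + optimizer state dict into an in-memory buffer, then load it
# back into a fresh model on the CPU.
model = nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
state = {'model': model.state_dict(), 'opt': opt.state_dict()}

buf = io.BytesIO()
torch.save(state, buf)
buf.seek(0)

loaded = torch.load(buf, map_location='cpu')
model2 = nn.Linear(2, 2)
model2.load_state_dict(loaded['model'])
print(torch.equal(model.weight, model2.weight))  # → True
```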
###Code learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:
- `model`: the model used for training/validation
- `dls`: the underlying `DataLoaders`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by 
callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:
- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:
- `smooth_loss`: an exponentially-averaged version of the training loss

Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, 
exception=CancelBatchException, train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not 
in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] 
+ (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
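For instance, a maximum-error metric can't be reconstructed from per-batch averages; a standalone sketch (plain Python, with a made-up metric, not the fastai `Metric` API) keeps running state across batches instead:

```python
# Standalone sketch of a metric whose value is not a plain batch average:
# the maximum absolute error seen over the whole epoch.
class MaxError:
    def reset(self): self.max_err = 0.0
    def accumulate(self, preds, targs):
        # keep only a plain float, not tensors, to avoid holding onto memory
        self.max_err = max(self.max_err, max(abs(p - t) for p, t in zip(preds, targs)))
    @property
    def value(self): return self.max_err

m = MaxError()
m.reset()
m.accumulate([1.0, 2.0], [1.5, 1.0])   # errors 0.5 and 1.0
m.accumulate([0.0], [0.2])             # error 0.2
print(m.value)  # → 1.0
```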
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`; otherwise you'll need to implement the following methods.

> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
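The debiased exponential smoothing applied by `AvgSmoothLoss` can be sketched in plain Python. This is a minimal standalone sketch; `smooth_losses` is a hypothetical helper, not part of the library:

```python
def smooth_losses(losses, beta=0.98):
    """Exponentially weighted moving average with debiasing, mirroring AvgSmoothLoss."""
    val, out = 0.0, []
    for i, loss in enumerate(losses, start=1):
        val = beta * val + (1 - beta) * loss   # torch.lerp(loss, val, beta) in the callback
        out.append(val / (1 - beta ** i))      # debias early values, as the `value` property does
    return out

print([round(v, 4) for v in smooth_losses([1.0, 0.5, 0.25])])
# → [1.0, 0.7475, 0.5783]
```

The division by `1 - beta**i` corrects the bias toward zero in the first iterations, which is why the first smoothed value equals the first raw loss.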
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,16.006254196166992,21.623294830322266,21.623295783996582,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dls.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dls.train) test_eq(res[0], res[1]) x,y = 
learn.dls.train_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dls.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
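As a rough illustration of why those activations are picked, here is a standalone sketch (plain-Python `softmax` and `sigmoid` helpers, not the library's implementations) showing how raw logits become well-behaved predictions:

```python
import math

def softmax(logits):
    """Probabilities from raw logits, as applied after a cross-entropy-style loss."""
    m = max(logits)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    """Applied after a binary-cross-entropy-with-logits loss."""
    return 1 / (1 + math.exp(-x))

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])   # → [0.659, 0.242, 0.099]
print(sigmoid(0.0))                   # → 0.5
```

Without the activation, the raw outputs would not sum to one (or lie in [0,1]), which is what makes the predictions interpretable.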
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dls.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds works with a ds not evenly divisible by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dls.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dls.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = _TupleModel(learn.model)
learn.dls = DataLoaders(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `Datasets`/`DataLoaders` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(Datasets(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dls = DataLoaders(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(tensor([2.])), [full_dec, dec, out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze the entire model") #hide
class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output _____no_output_____ ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dls self.dls = self.dls.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dls = old_dbunch #export def load_learner(fname, cpu=True): "Load a `Learner` object in `fname`, optionally putting it on the `cpu`" res = torch.load(fname, map_location='cpu' if cpu else None) if hasattr(res, 'to_fp32'): res = res.to_fp32() if cpu: res.dls.cpu() return res ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25, use_max=False): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch aug_preds.append(self.get_preds(ds_idx)[0][None]) aug_preds = torch.cat(aug_preds) aug_preds = aug_preds.max(0)[0] if use_max else aug_preds.mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) if use_max: 
return torch.stack([preds, aug_preds], 0).max(0)[0] preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. 
Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. ###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataLoaders(train_dl, valid_dl, device=device) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', False))) if self.run and _run: getattr(self, event_name, noop)() if event_name=='after_fit': self.run=True #Reset self.run to True at each end 
of fit def __setattr__(self, name, value): if hasattr(self.learn,name): warn(f"You are setting an attribute ({name}) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.{name}` if you would like to change it in the learner.") super().__setattr__(name, value) @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. 
It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2. 
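The event dispatch described above — an event name resolving to a method of the same name, with unimplemented events silently skipped — can be sketched without fastai (hypothetical `MiniCallback`/`TrackingCallback` names, not part of the library):

```python
class MiniCallback:
    """Minimal stand-in for `Callback`: an event name resolves to the method of the same name."""
    def __call__(self, event_name):
        getattr(self, event_name, lambda: None)()   # unimplemented events are no-ops

class TrackingCallback(MiniCallback):
    def begin_fit(self):   self.events = ['begin_fit']
    def after_batch(self): self.events.append('after_batch')

cb = TrackingCallback()
for e in ['begin_fit', 'after_pred', 'after_batch']:   # after_pred is silently skipped
    cb(e)
print(cb.events)   # → ['begin_fit', 'after_batch']
```

This is why a callback only needs to implement the handful of events it cares about.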
It also issues a warning that something is probably wrong: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output /home/sgugger/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:16: UserWarning: You are setting an attribute (a) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.a` if you would like to change it in the learner. app.launch_new_instance() ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model on the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dls.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization.
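The progress bookkeeping it performs can be sketched standalone (hypothetical `pct_train_schedule` helper, not part of the library): with `n_epoch` epochs of `n_iter` batches each, `pct_train` advances by `1/(n_iter*n_epoch)` per batch and reaches `1.0` on the last one.

```python
def pct_train_schedule(n_epoch, n_iter):
    """Value of pct_train after each training batch, as TrainEvalCallback updates it."""
    pct, out = 0.0, []
    for _ in range(n_epoch * n_iter):
        pct += 1.0 / (n_iter * n_epoch)   # the per-batch increment from after_batch
        out.append(round(pct, 4))
    return out

print(pct_train_schedule(2, 2))   # → [0.25, 0.5, 0.75, 1.0]
```

Schedulers such as one-cycle read this fraction to decide where they are in training.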
###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if 
self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelBatchException="Skip the rest of this batch and go to `after_batch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelFitException="Interrupts training and go to `after_fit`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`- 
`after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
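The way the training loop catches these exceptions can be sketched with a toy loop. These are standalone stand-ins (the exception class here is local, and `run_batches` is hypothetical), not the fastai classes or the actual loop:

```python
class CancelBatchException(Exception):
    """Local stand-in for the fastai exception of the same name."""

def run_batches(batches):
    """Toy batch loop: raising CancelBatchException skips the rest of that batch only."""
    done, n_after_batch = [], 0
    for b in batches:
        try:
            if b == 'bad':
                raise CancelBatchException()   # as a callback event might do
            done.append(b)                     # the rest of the batch work
        except CancelBatchException:
            pass                               # 'after_cancel_batch' would run here
        finally:
            n_after_batch += 1                 # 'after_batch' runs in both cases
    return done, n_after_batch

print(run_batches(['a', 'bad', 'b']))   # → (['a', 'b'], 3)
```

The `try`/`except`/`finally` shape is the key point: a cancellation skips the remaining work for that scope but the corresponding `after_*` event still fires.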
###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") 
elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack sort_by_run # export class Learner(): def __init__(self, dls, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True, moms=(0.95,0.85,0.95)): store_attr(self, "dls,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dls.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dls, 'path', Path('.')) self.add_cbs([(cb() if isinstance(cb, type) else cb) for cb in L(defaults.callbacks)+L(cbs)]) self.model.to(self.dls.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dls, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): self.n_epoch,self.loss 
= n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dls.train; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffled=False, drop_last=False) cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: 
stack.enter_context(mgr) self(_before_epoch) self._do_epoch_validate(dl=dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None): dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dls, 'n_inp', -1) full_dec = self.dls.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=1, dl=None, max_n=9, shuffle=True, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffle=shuffle) b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dls.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dls.device if self.opt is None: 
self.create_opt() distrib_barrier() file = join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dls` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`.
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as the learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.`cbs` is one or a list of `Callback`s to pass to the `Learner`. Each `Callback` is registered as an attribute of `Learner` (under its snake-cased class name, e.g. `learn.train_eval` for `TrainEvalCallback`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below).
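The callback registration and event dispatch described above (each `Callback` becomes a snake-cased attribute of the learner and is invoked by event name) can be sketched in plain Python. This is an illustrative toy, not the fastai implementation; `MiniLearner` and its method names are made up for the example:

```python
import re

class Callback:
    "Base: a callback responds to the string event names it implements."
    @property
    def name(self):
        # CamelCase class name -> snake_case attribute, 'Callback' suffix dropped
        cls = self.__class__.__name__.replace('Callback', '')
        return re.sub(r'(?<!^)(?=[A-Z])', '_', cls).lower() or 'callback'
    def __call__(self, event_name):
        method = getattr(self, event_name, None)
        if method is not None: method()

class MiniLearner:
    def __init__(self, cbs=()):
        self.cbs = []
        for cb in cbs: self.add_cb(cb)
    def add_cb(self, cb):
        cb.learn = self             # callbacks get a back-reference to the learner
        setattr(self, cb.name, cb)  # registered under the snake-cased name
        self.cbs.append(cb)
    def __call__(self, event_name):
        for cb in self.cbs: cb(event_name)

class TrainEvalCallback(Callback):
    def begin_fit(self): self.learn.log = ['begin_fit']

learn = MiniLearner(cbs=[TrainEvalCallback()])
learn('begin_fit')
print(learn.train_eval.name, learn.log)  # -> train_eval ['begin_fit']
```

The real `Learner.__call__` adds ordering (`sort_by_run`) and the `event` namespace check on top of this pattern.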
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cbs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cbs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not self.training 
and not self.model.training learn = synth_learner(cbs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cbs=TestTrainEvalCallback, cuda=True) learn.fit(1) #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cbs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training method, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. ###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) 
def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dls.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, 
cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dls.train learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dls.valid learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2 + ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
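`save_model`/`load_model` bundle the model state and (optionally) the optimizer state in a single dict, and on load detect whether an optimizer state is present by checking the keys. That detection logic can be sketched without torch (these function names are illustrative; real checkpoints go through `torch.save`/`torch.load` and `state_dict`s):

```python
import io
import pickle

def save_state(f, model_state, opt_state=None, with_opt=True):
    "Save model_state, optionally wrapped in a dict with the optimizer state."
    if opt_state is None: with_opt = False
    state = {'model': model_state, 'opt': opt_state} if with_opt else model_state
    pickle.dump(state, f)

def load_state(f):
    "Return (model_state, opt_state); opt_state is None if it wasn't saved."
    state = pickle.load(f)
    hasopt = isinstance(state, dict) and set(state) == {'model', 'opt'}
    return (state['model'], state['opt']) if hasopt else (state, None)

buf = io.BytesIO()
save_state(buf, {'w': [1.0, 2.0]}, opt_state={'lr': 1e-3})
buf.seek(0)
model_state, opt_state = load_state(buf)
print(opt_state)  # -> {'lr': 0.001}
```

This mirrors why `load_model` can warn rather than fail when asked for an optimizer state that the file doesn't contain.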
###Code learn = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataLoaders`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks).
`xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, 
train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_validate'] + 
batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i 
>=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
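The `smooth_loss` maintained by `AvgSmoothLoss` is a standard exponentially weighted moving average with bias correction: each step blends the new loss in with weight `1-beta`, and the division by `1 - beta**count` compensates for initialising the average at zero. A torch-free sketch of the same arithmetic (the class name here is illustrative):

```python
class SmoothValue:
    "Exponentially weighted average with debiasing, as in AvgSmoothLoss."
    def __init__(self, beta=0.98):
        self.beta, self.count, self.val = beta, 0, 0.0
    def add(self, v):
        self.count += 1
        # torch.lerp(new, old, beta) == beta*old + (1-beta)*new
        self.val = self.beta * self.val + (1 - self.beta) * v
    @property
    def value(self):
        return self.val / (1 - self.beta ** self.count)

s = SmoothValue(beta=0.9)
for v in [1.0, 1.0, 1.0]: s.add(v)
print(round(s.value, 6))  # -> 1.0 (a constant stream debiases back to the constant)
```

Without the debiasing term, the first few values would be dragged toward zero by the `val = 0` initialisation; with it, a constant input stream reports exactly that constant from step one.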
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,13.54025650024414,7.539639472961426,7.539639711380005,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dls.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dls.train) test_eq(res[0], res[1]) x,y = learn.dls.train_ds.tensors 
test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dls.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
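As a rough, framework-free illustration of that dispatch (the helper names here are made up; in fastai the mapping lives on the loss function's `activation` attribute), a cross-entropy-style loss gets a softmax over the outputs while a BCE-with-logits-style loss gets an element-wise sigmoid:

```python
import math

# Framework-free sketch of the two activations get_preds may apply:
# softmax for cross-entropy-style losses, sigmoid for BCE-with-logits.
def softmax(xs):
    m = max(xs)                          # subtract the max for stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

logits = [-1.0, 0.0, 2.0]
probs = softmax(logits)      # entries sum to 1, usable as class probabilities
binary = sigmoid(0.0)        # a 0 logit maps to probability 0.5
```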
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dls.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds work with ds not evenly dividble by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dls.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dls.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = _TupleModel(learn.model) 
learn.dls = DataLoaders(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `Datasets`/`DataLoaders` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(Datasets(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dls = DataLoaders(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(tensor([2.])), [full_dec, dec, out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze the entire model") #hide
class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cbs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cbs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,13.635464668273926,13.02572250366211,'00:00'] (#4) [0,11.34323501586914,10.841130256652832,'00:00'] (#4) [0,9.457951545715332,9.024518966674805,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dls self.dls = self.dls.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dls = old_dbunch #export def load_learner(fname, cpu=True): "Load a `Learner` object in `fname`, optionally putting it on the `cpu`" res = torch.load(fname, map_location='cpu' if cpu else None) if hasattr(res, 'to_fp32'): res = res.to_fp32() if cpu: res.dls.cpu() return res ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25, use_max=False): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch aug_preds.append(self.get_preds(ds_idx)[0][None]) aug_preds = torch.cat(aug_preds) aug_preds = 
aug_preds.max(0)[0] if use_max else aug_preds.mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) if use_max: return torch.stack([preds, aug_preds], 0).max(0)[0],targs preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. 
Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. ###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) tfms = Cuda() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0) return DataBunch(train_dl, valid_dl) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run = 'learn',None,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" if self.run: getattr(self, event_name, noop)() @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists of a
minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. 
It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that this only works for getting the value of an attribute; if you want to change it, you have to manually access it with `self.learn.bla`.
In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output _____no_output_____ ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" def begin_fit(self): "Set the iter and epoch counters to 0, put the model and the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dbunch.device) def after_batch(self): "Update the iter counter (in training mode)" if not self.training: return self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization. 
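The counter bookkeeping it performs can be sketched in isolation (the `n_epoch`/`n_iter` values below are made up; the real callback reads them from the `Learner`):

```python
# Standalone sketch of TrainEvalCallback's counters: `train_iter` counts
# training batches seen, `pct_train` tracks the fraction of training done.
n_epoch, n_iter = 2, 5          # made-up values; normally set by the Learner
pct_train, train_iter = 0.0, 0
for epoch in range(n_epoch):
    for it in range(n_iter):    # training batches only; validation is skipped
        pct_train += 1.0 / (n_iter * n_epoch)
        train_iter += 1
print(train_iter)               # 10 batches seen in total
print(round(pct_train, 6))      # reaches 1.0 at the end of training
```

Schedulers such as the one-cycle policy rely on `pct_train` advancing linearly from 0 to 1 over the whole fit, which is why the increment divides by both `n_iter` and `n_epoch`.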
###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None): store_attr(self, "with_input,with_loss,save_preds,save_targs") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, 
title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelBatchException="Skip the rest of this batch and go to `after_batch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelFitException="Interrupts training and go to `after_fit`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to
`after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
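A stripped-down batch loop (plain Python, with event names matching the list above; this is an illustration of the try/except/finally pattern, not the actual `Learner` code) shows how one of these exceptions short-circuits the work while the `after_*` events still fire:

```python
# Minimal sketch of how the training loop catches a control-flow exception:
# the cancelled batch skips straight to after_cancel_batch, but after_batch
# still runs because it sits in the `finally` clause.
class CancelBatchException(Exception): pass   # stand-in for fastai's exception

events = []
def one_batch(i):
    events.append('begin_batch')
    try:
        if i == 1: raise CancelBatchException()  # a callback cancels this batch
        events.append('after_step')              # normal path
    except CancelBatchException:
        events.append('after_cancel_batch')      # cancellation hook
    finally:
        events.append('after_batch')             # always reached

for i in range(2): one_batch(i)
print(events)
```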
###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = slice(3e-3) defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer 
state.") elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export class Learner(): def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True): store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dbunch.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dbunch, 'path', Path('.')) self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs)) self.add_cbs(cbs) self.model.to(self.dbunch.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): 
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dbunch.train_dl; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dbunch.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dbunch.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, act=None, **kwargs): cb = GatherPredsCallback(with_input=with_input, **kwargs) with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: 
res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=0): dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dbunch, 'n_inp', -1) full_dec = self.dbunch.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs): if dl is None: dl = self.dbunch.dls[ds_idx] b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dbunch.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dbunch.device if self.opt is None: self.create_opt() distrib_barrier() file = join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group 
together a `model`, some `dbunch` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups).
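As a quick sketch, a custom splitter just returns one list of parameters per group; the `BodyHead` model and its `body`/`head` attribute names below are assumptions made for illustration, not part of the library:

```python
import torch.nn as nn

# Hypothetical model with two sub-modules we want to treat as separate
# parameter groups (e.g. to use discriminative learning rates).
class BodyHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Linear(4, 8)
        self.head = nn.Linear(8, 2)
    def forward(self, x): return self.head(self.body(x))

def body_head_splitter(model):
    "Return one parameter group per sub-module; a splitter has this shape."
    return [list(model.body.parameters()), list(model.head.parameters())]

groups = body_head_splitter(BodyHead())
```

The optimizer created from such a splitter then carries one set of hyper-parameters per group.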
The default is `trainable_params`, which returns all trainable parameters of the model.`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`.`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below). Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cb_funcs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cb_funcs=TstCallback, lr=1e-2) xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after=TrainEvalCallback def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): 
test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): if self.training: test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not self.training and not self.model.training learn = synth_learner(cb_funcs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True) learn.fit(1) #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. 
In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. ###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dbunch.one_batch() learn =
synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dbunch.train_dl learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dbunch.valid_dl learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. 
Use `device` to load the model/optimizer state on a device different from the one it was saved on. ###Code learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current
`DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. 
to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += 
['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, 
CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output 
_____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. 
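As an example of such a metric, a root-mean-squared error can't be averaged over batches directly, so it accumulates the squared-error sum and the sample count instead. This is a sketch using a simplified stand-in for the `Metric` base class and a minimal fake `learn` object (only `pred` and `yb`, following the conventions above — `RMSEMetric` itself is a hypothetical name, not a library class):

```python
from types import SimpleNamespace
import torch

class Metric:
    "Simplified stand-in for the base class shown above."
    def reset(self): pass
    def accumulate(self, learn): pass
    @property
    def value(self): raise NotImplementedError

class RMSEMetric(Metric):
    "Accumulate squared errors so the final value is exact over uneven batches."
    def reset(self): self.total,self.count = 0.,0
    def accumulate(self, learn):
        bs = learn.yb[0].shape[0]
        # store plain Python floats (on the CPU) to avoid holding on to tensors
        self.total += ((learn.pred - learn.yb[0])**2).sum().item()
        self.count += bs
    @property
    def value(self): return (self.total/self.count)**0.5 if self.count else None

tst = RMSEMetric(); tst.reset()
preds,targs = torch.randn(100),torch.randn(100)
for i in range(0, 100, 30):   # uneven batch sizes: 30, 30, 30, 10
    tst.accumulate(SimpleNamespace(pred=preds[i:i+30], yb=(targs[i:i+30],)))
```

Because only the running sum and count are kept, the result matches the RMSE computed over the whole dataset regardless of how the batches were split.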
###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With 
varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, beta=0.98): self.add_time,self.train_metrics = add_time,train_metrics self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self._valid_mets.attrgot('name') if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}') else: names = L('train_loss', 'valid_loss') + names[1:] if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training 
else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return L(self.loss) + self.metrics def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate 
= "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `training_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`). ###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. 
self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, 
metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,20.007387161254883,23.016841888427734,23.016841888427734,00:00] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dbunch.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dbunch.train_dl) test_eq(res[0], res[1]) x,y = learn.dbunch.train_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. 
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dbunch.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dbunch.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds work with ds not evenly dividble by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, 
reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dbunch.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dbunch.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = _TupleModel(learn.model) learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def
activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dbunch = DataBunch(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(tensor([2.])), [full_dec, dec, out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze the entire model") #hide class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,9.796625137329102,10.653870582580566,00:00] (#4) [0,8.240012168884277,8.928829193115234,00:00] (#4) [0,6.8678107261657715,7.481707572937012,00:00] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dbunch self.dbunch = self.dbunch.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dbunch = old_dbunch ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def 
tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dbunch.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch aug_preds.append(self.get_preds(dl=dl)[0][None]) aug_preds = torch.cat(aug_preds).mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. 
Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.model.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. 
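The `(1-beta)`/`beta` blend that `tta` above computes with `torch.lerp` can be sketched in plain Python. This is a toy stand-in with made-up numbers, not the fastai implementation: `aug_runs` plays the role of the `n` augmented prediction passes, and `lerp` mirrors the elementwise behavior of `torch.lerp(aug_preds, preds, beta)`.

```python
def lerp(aug_preds, preds, beta):
    # result = aug_preds + beta * (preds - aug_preds)
    #        = (1 - beta) * aug_preds + beta * preds
    # (elementwise, mirroring torch.lerp(aug_preds, preds, beta))
    return [a + beta * (p - a) for a, p in zip(aug_preds, preds)]

# Hypothetical numbers: n=4 augmented passes over two items...
aug_runs = [[0.8, 0.4], [1.0, 0.6], [0.9, 0.5], [0.7, 0.3]]
# ...averaged per item, like torch.cat(aug_preds).mean(0)
aug_preds = [sum(col) / len(col) for col in zip(*aug_runs)]

# Plain (non-augmented) predictions, blended in with beta=0.25
plain_preds = [1.0, 0.5]
final = lerp(aug_preds, plain_preds, 0.25)
print(final)  # approximately [0.8875, 0.4625]
```

With `beta=0.25`, the averaged augmented predictions carry three quarters of the weight, which is the default weighting `tta` applies.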
###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataLoaders(train_dl, valid_dl, device=device) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', False))) if self.run and _run: getattr(self, event_name, noop)() if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit def __setattr__(self, name, value): if hasattr(self.learn,name): warn(f"You are setting an attribute ({name}) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. 
Use `self.learn.{name}` if you would like to change it in the learner.") super().__setattr__(name, value) @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. 
It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2. It also issues a warning that something is probably wrong: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output /home/sgugger/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:16: UserWarning: You are setting an attribute (a) that also exists in the learner. 
Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.a` if you would like to change it in the learner. app.launch_new_instance() ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model on the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dls.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization. ###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. 
class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want 
to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelFitException="Interrupts training and go to `after_fit`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelBatchException="Skip the rest of this batch and go to `after_batch`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate 
after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*. ###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, 
and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack # export class Learner(): def __init__(self, dls, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True, moms=(0.95,0.85,0.95)): store_attr(self, "dls,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() if loss_func is None: loss_func = getattr(dls.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dls, 'path', Path('.')) self.add_cbs([(cb() if isinstance(cb, type) else cb) for cb in L(defaults.callbacks)+L(cbs)]) self.model.to(self.dls.device) if hasattr(self.model, 'reset'): self.model.reset() self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.) 
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dls, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): self.n_epoch,self.loss 
= n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dls.train; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, inner=False, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffled=False, drop_last=False) cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: 
stack.enter_context(mgr) self(event.begin_epoch if inner else _before_epoch) self._do_epoch_validate(dl=dl) self(event.after_epoch if inner else _after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None, with_input=False): dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) dec = self.dls.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0] i = getattr(self.dls, 'n_inp', -1) dec_inp,dec_targ = map(detuplify, [dec[:i],dec[i:]]) res = dec_targ,dec_preds[0],preds[0] if with_input: res = (dec_inp,) + res return res def show_results(self, ds_idx=1, dl=None, max_n=9, shuffle=True, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffle=shuffle) b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dls.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, 
getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dls.device if self.opt is None: self.create_opt() distrib_barrier() file = join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dls` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`. 
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`. `metrics` is an optional list of metrics that can be either functions or `Metric`s (see below). 
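To make the `splitter` contract concrete, here is a minimal plain-Python sketch. The toy model and its string "parameters" are illustrative stand-ins (not fastai or PyTorch objects): a splitter simply maps the model to a list of parameter groups, which is what `freeze_to` and discriminative learning rates then operate on.

```python
# Sketch of the `splitter` contract with plain-Python stand-ins: given a
# model, return a list of parameter groups (here lists of strings instead
# of real nn.Parameter objects).
class ToyModel:
    def __init__(self):
        self.body = ['body.weight', 'body.bias']  # pretend backbone params
        self.head = ['head.weight', 'head.bias']  # pretend head params

def toy_splitter(m):
    # Two groups: freezing up to the last group trains only the head.
    return [m.body, m.head]

groups = toy_splitter(ToyModel())
assert len(groups) == 2 and groups[-1] == ['head.weight', 'head.bias']
```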
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cbs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cbs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not self.training 
and not self.model.training learn = synth_learner(cbs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cbs=TestTrainEvalCallback, cuda=True) learn.fit(1) #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cbs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. ###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) 
def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dls.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, 
cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dls.train learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dls.valid learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2 + ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on. 
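The save/load round-trip can be pictured as writing a dict that always holds the model state and, only when `with_opt=True`, the optimizer state. The sketch below uses plain dicts and pickle with illustrative names; it is NOT fastai's actual `.pth` format (which goes through `torch.save`), just the shape of the contract.

```python
import os, pickle, tempfile

# Illustrative sketch (plain dicts + pickle, not fastai's real format):
# model state is always saved, optimizer state only when with_opt=True.
def save_state(path, model_state, opt_state, with_opt=True):
    state = {'model': model_state}
    if with_opt: state['opt'] = opt_state
    with open(path, 'wb') as f: pickle.dump(state, f)

def load_state(path):
    with open(path, 'rb') as f: state = pickle.load(f)
    return state['model'], state.get('opt')  # opt is None if it wasn't saved

path = os.path.join(tempfile.mkdtemp(), 'tmp.pth')
save_state(path, {'a': 1.0}, {'lr': 0.01}, with_opt=False)
model_state, opt_state = load_state(path)
assert model_state == {'a': 1.0} and opt_state is None
```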
###Code learn = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:
- `model`: the model used for training/validation
- `data`: the underlying `DataLoaders`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:
- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:
- `smooth_loss`: an exponentially-averaged version of the training loss

Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, 
train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_validate'] + 
batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i 
>=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`). 
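Concretely, the smoothing done by `AvgSmoothLoss` is a debiased exponential moving average; the same arithmetic with plain floats (no tensors) looks like:

```python
# Debiased exponential moving average, mirroring AvgSmoothLoss's arithmetic
# with plain floats instead of tensors.
def smooth_losses(losses, beta=0.98):
    val, out = 0.0, []
    for count, loss in enumerate(losses, start=1):
        val = beta * val + (1 - beta) * loss   # same as torch.lerp(loss, val, beta)
        out.append(val / (1 - beta ** count))  # debias the early estimates
    return out

vals = smooth_losses([1.0, 1.0, 1.0])
# With constant input, the debiased average reproduces the input exactly.
assert all(abs(v - 1.0) < 1e-9 for v in vals)
```

Without the `1 - beta**count` correction, the running value would start heavily biased toward the zero initialization; the division removes that bias so even the first smoothed loss is meaningful.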
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,26.88559341430664,25.375843048095703,25.375842094421387,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dls.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dls.train) test_eq(res[0], res[1]) x,y = learn.dls.train_ds.tensors 
test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dls.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
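In `get_preds` the automatic pick boils down to `act = getattr(self.loss_func, 'activation', noop)`; a toy sketch of that dispatch (the loss classes below are illustrative stand-ins, not fastai's):

```python
def noop(x): return x  # identity fallback when the loss defines no activation

class ToySoftmaxLoss:
    def activation(self, x):
        # stand-in for a real softmax: normalize positive scores to sum to 1
        s = sum(x)
        return [v / s for v in x]

class ToyPlainLoss: pass  # no `activation` attribute at all

def pick_activation(loss_func):
    # Same lookup pattern get_preds uses on self.loss_func.
    return getattr(loss_func, 'activation', noop)

assert pick_activation(ToySoftmaxLoss())([1.0, 3.0]) == [0.25, 0.75]
assert pick_activation(ToyPlainLoss()) is noop
```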
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dls.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds work with ds not evenly dividble by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dls.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dls.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = _TupleModel(learn.model) 
learn.dls = DataLoaders(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) tst = learn.get_preds(ds_idx=0, with_input=True, with_decoded=True) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `Datasets`/`DataLoaders` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(Datasets(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dls = DataLoaders(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(inp), [full_dec,dec,out]) test_eq(learn.predict(inp, with_input=True), [inp,full_dec,dec,out]) #export class FetchPreds(Callback): "A callback to fetch predictions during the training loop" def __init__(self, ds_idx=1, dl=None, with_input=False, with_decoded=False): store_attr(self, 'ds_idx,dl,with_input,with_decoded') def after_validate(self): learn,rec =
self.learn,self.learn.recorder learn.remove_cbs([self,rec]) self.preds = learn.get_preds(ds_idx=self.ds_idx, dl=self.dl, with_input=self.with_input, with_decoded=self.with_decoded, inner=True) learn.add_cbs([self, rec]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze the entire model") #hide class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cbs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cbs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,7.853846549987793,6.445760726928711,'00:00'] (#4) [0,6.233814239501953,5.162293434143066,'00:00'] (#4) [0,5.032419681549072,4.134268760681152,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dls self.dls = self.dls.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dls = old_dbunch #export def load_learner(fname, cpu=True): "Load a `Learner` object in `fname`, optionally putting it on the `cpu`" res = torch.load(fname, map_location='cpu' if cpu else None) if hasattr(res, 'to_fp32'): res = res.to_fp32() if cpu: res.dls.cpu() return res ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25, use_max=False): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dls[ds_idx] if item_tfms is not None 
or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch aug_preds.append(self.get_preds(ds_idx, inner=True)[0][None]) aug_preds = torch.cat(aug_preds) aug_preds = aug_preds.max(0)[0] if use_max else aug_preds.mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx, inner=True) if use_max: return torch.stack([preds, aug_preds], 0).max(0)[0],targs preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. 
Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. 
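The `beta` blending used by `tta` above reduces to a linear interpolation between the averaged augmented predictions and the plain ones; the toy numbers below are made up purely to illustrate the arithmetic behind `torch.lerp(aug_preds, preds, beta)`.

```python
# torch.lerp(a, b, beta) computes a + (b - a) * beta, i.e. (1-beta)*a + beta*b.
aug_preds = [0.2, 0.4, 0.6, 0.8]           # predictions from n=4 augmented passes
aug_mean = sum(aug_preds) / len(aug_preds)  # average over augmentations -> 0.5
plain = 0.9                                 # prediction with the plain (validation) transforms
beta = 0.25
final = aug_mean + (plain - aug_mean) * beta  # (1-0.25)*0.5 + 0.25*0.9 = 0.6
```

With `use_max=True` the element-wise maximum replaces this interpolation, and `beta=None` skips the blend and returns both tensors.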
###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataLoaders(train_dl, valid_dl, device=device) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', False))) if self.run and _run: getattr(self, event_name, noop)() def __setattr__(self, name, value): if hasattr(self.learn,name): warn(f"You are setting an attribute ({name}) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. 
Use `self.learn.{name}` if you would like to change it in the learner.") super().__setattr__(name, value) @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. 
It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output /home/tako/.local/lib/python3.6/site-packages/ipykernel_launcher.py:15: UserWarning: You are setting an attribute (a) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. 
Use `self.learn.a` if you would like to change it in the learner. from ipykernel import kernelapp as app ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model and the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dls.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization. ###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. 
class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want
to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelFitException="Interrupt training and go to `after_fit`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelBatchException="Skip the rest of this batch and go to `after_batch`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate
after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*. ###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, 
and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") elif with_opt: warn("Saved filed doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack # export class Learner(): def __init__(self, dls, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, cb_funcs=None, metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True, moms=(0.95,0.85,0.95)): store_attr(self, "dls,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dls.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dls, 'path', Path('.')) self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs)) self.add_cbs(cbs) self.model.to(self.dls.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.) 
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dls, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): 
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dls.train; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffled=False, drop_last=False) cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in 
ctx_mgrs: stack.enter_context(mgr) self(_before_epoch) self._do_epoch_validate(dl=dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None): dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dls, 'n_inp', -1) full_dec = self.dls.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=1, dl=None, max_n=9, shuffle=True, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffle=shuffle) b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dls.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dls.device if self.opt is 
None: self.create_opt() distrib_barrier() file = join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dls` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`.
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`. `metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).
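As a sketch of what a custom `splitter` might look like (the two-group split below is a made-up example, not a fastai helper): it returns one parameter group for the body and one for the head, so that `freeze_to(-1)` would leave only the head group trainable.

```python
import torch.nn as nn

def body_head_splitter(model):
    # Hypothetical splitter: all layers but the last form the "body" group,
    # the last layer forms the "head" group.
    children = list(model.children())
    to_params = lambda ms: [p for m in ms for p in m.parameters()]
    return [to_params(children[:-1]), to_params(children[-1:])]

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
groups = body_head_splitter(model)  # [body params, head params]
```

Each group can then receive its own hyper-parameters (e.g. discriminative learning rates) from the optimizer.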
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cb_funcs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cb_funcs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not 
self.training and not self.model.training learn = synth_learner(cb_funcs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True) learn.fit(1) learn.dls.device #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. 
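The step sequence just described can be written out on the same toy regression `y = a*x + b` that `synth_learner` trains; the closed-form gradients below match the checks in `TestOneBatch` (`grad_a` is the mean of `2*x*(pred-y)`, `grad_b` the mean of `2*(pred-y)`). This is an illustrative pure-Python sketch, not the `Learner` internals:

```python
def one_batch_step(a, b, xb, yb, lr=0.01):
    "One hedged training step: predict, MSE loss, gradients, SGD update."
    n = len(xb)
    preds = [a * x + b for x in xb]                              # compute predictions
    loss = sum((p - y) ** 2 for p, y in zip(preds, yb)) / n      # compute MSE loss
    grad_a = sum(2 * x * (p - y) for x, p, y in zip(xb, preds, yb)) / n
    grad_b = sum(2 * (p - y) for p, y in zip(preds, yb)) / n     # compute gradients
    return a - lr * grad_a, b - lr * grad_b, loss                # update parameters
```

Repeated calls on a batch drawn from `y = 2x + 3` steadily reduce the loss, which is what `Learner.fit` verifies at scale in the cells above.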
###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dls.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() batch_events = 
['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dls.train learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dls.valid learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on. 
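The "buffer" option mentioned above can be illustrated with plain `pickle` and an in-memory `io.BytesIO`. This is a sketch of the idea only: `Learner.save`/`load` go through `torch.save`/`torch.load`, which accept file-like objects in the same way, and the `state` dict here is a made-up stand-in for real model state:

```python
import io
import pickle

state = {"a": 1.5, "b": -0.25}  # illustrative stand-in for model state
buf = io.BytesIO()              # a buffer instead of a path or filename
pickle.dump(state, buf)         # serialize into the buffer
buf.seek(0)                     # rewind before reading back
restored = pickle.load(buf)     # deserialize: equal to the original state
```

Buffers are handy in tests and in-memory round-trips where touching the filesystem is undesirable.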
###Code learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataLoaders`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by 
callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, 
exception=CancelBatchException, train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not 
in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] 
+ (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`). 
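The `beta` smoothing with its bias correction can be written out as a standalone sketch. This is an illustrative re-implementation of the arithmetic in `AvgSmoothLoss` on plain floats (the function name is hypothetical):

```python
def smooth_losses(losses, beta=0.98):
    "Exponentially weighted average of `losses` with bias correction."
    val, out = 0.0, []
    for count, loss in enumerate(losses, start=1):
        val = beta * val + (1 - beta) * loss   # same blend as torch.lerp(loss, val, beta)
        out.append(val / (1 - beta ** count))  # debias: val / (1 - beta**count)
    return out
```

With a constant loss, the debiased value equals that constant from the first batch onward; without the correction term, early values would be pulled toward the zero initialization.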
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,8.058332443237305,6.8562912940979,6.8562912940979,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dls.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dls.train) test_eq(res[0], res[1]) x,y = learn.dls.train_ds.tensors 
test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dls.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
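The loss-to-activation idea can be sketched with a hypothetical lookup; the names `ACTS` and `apply_act` below are illustrative only (in fastai the activation actually lives on the loss function itself, as the `activation` method on the fake loss class later in this notebook shows):

```python
import math

def sigmoid(x):
    "Logistic sigmoid, the activation paired with BCE-with-logits losses."
    return 1 / (1 + math.exp(-x))

# Hypothetical mapping from loss type to final activation:
ACTS = {"bce_with_logits": sigmoid}

def apply_act(raw_preds, loss_name):
    "Map raw model outputs through the activation implied by the loss."
    act = ACTS.get(loss_name, lambda x: x)  # default: identity
    return [act(p) for p in raw_preds]
```

A logit of 0 decodes to a probability of 0.5, while losses with no registered activation leave the raw outputs untouched.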
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'. ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dls.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds works with a ds not evenly divisible by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dls.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dls.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = _TupleModel(learn.model) 
learn.dls = DataLoaders(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `Datasets`/`DataLoaders` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(Datasets(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dls = DataLoaders(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(tensor([2.])), [full_dec, dec, out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze the entire model") #hide 
class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,8.095123291015625,8.085640907287598,'00:00'] (#4) [0,6.929747581481934,6.923874378204346,'00:00'] (#4) [0,5.954098701477051,5.9301323890686035,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dls self.dls = self.dls.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dls = old_dbunch #export def load_learner(fname, cpu=True): "Load a `Learner` object in `fname`, optionally putting it on the `cpu`" res = torch.load(fname, map_location='cpu' if cpu else None) if hasattr(res, 'to_fp32'): res = res.to_fp32() if cpu: res.dls.cpu() return res ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25, use_max=False): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch aug_preds.append(self.get_preds(ds_idx)[0][None]) aug_preds = torch.cat(aug_preds) aug_preds = 
aug_preds.max(0)[0] if use_max else aug_preds.mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) if use_max: return torch.stack([preds, aug_preds], 0).max(0)[0] preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. 
Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. ###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataBunch(train_dl, valid_dl, device=device) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', 
False))) if self.run and _run: getattr(self, event_name, noop)() @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions: looping through the data we:
- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients

Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:
- `begin_fit`: called before doing anything, ideal for initial setup.
- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `begin_train`: called at the beginning of the training part of an epoch.
- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).
- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that this only works to get the value of the attribute; if you want to change it, you have to manually access it with `self.learn.bla`.
In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output _____no_output_____ ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model and the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dbunch.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization. 
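The attribute mechanics above — reading learner state through plain attribute lookup, writing it back through `self.learn` — can be sketched without fastai. `MiniLearner` and `CountEpochs` below are hypothetical stand-ins for `Learner` and a callback, not library code:

```python
# Hypothetical stand-ins (not fastai code) for the Learner/Callback pattern:
# the learner dispatches named events, callbacks write state via self.learn.

class MiniLearner:
    def __init__(self, cbs):
        self.cbs, self.epoch = cbs, 0
        for cb in cbs: cb.learn = self  # register, as Learner.add_cb does

    def __call__(self, event_name):
        # Call the matching method on every callback, if it exists
        for cb in self.cbs: getattr(cb, event_name, lambda: None)()

    def fit(self, n_epoch):
        self("begin_fit")
        for self.epoch in range(n_epoch):
            self("begin_epoch"); self("after_epoch")
        self("after_fit")

class CountEpochs:
    def begin_fit(self):   self.learn.n_seen = 0   # write through self.learn
    def after_epoch(self): self.learn.n_seen += 1

learn = MiniLearner([CountEpochs()])
learn.fit(3)
print(learn.n_seen)  # 3
```

The real `Callback` adds the `GetAttr` read shortcut on top of this pattern, which is why `self.n_seen` would also work for reading (but not assigning) inside the callback.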
###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if 
self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelBatchException="Skip the rest of this batch and go to `after_batch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelFitException="Interrupts training and go to `after_fit`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions has occurred and add code that executes right after with the following events:
- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
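As an illustration of this control flow (plain Python, not the fastai implementation), a hypothetical early-stopping callback raises `CancelFitException` and a minimal loop catches it at the point where `after_cancel_fit` would run:

```python
# Sketch only: EarlyStop and fit below are hypothetical, mirroring how
# Learner.fit wraps its epoch loop in a try/except on CancelFitException.

class CancelFitException(Exception): pass

class EarlyStop:
    def __init__(self, patience):
        self.patience, self.bad, self.best = patience, 0, float("inf")
    def after_epoch(self, loss):
        self.bad = self.bad + 1 if loss >= self.best else 0
        self.best = min(self.best, loss)
        if self.bad >= self.patience: raise CancelFitException()

def fit(losses, cb):
    done = []
    try:
        for epoch, loss in enumerate(losses):
            done.append(epoch)
            cb.after_epoch(loss)   # may raise CancelFitException
    except CancelFitException:
        pass                       # 'after_cancel_fit' would fire here
    return done                    # 'after_fit' always runs afterwards

print(fit([3., 2., 2.5, 2.6, 2.7, 1.0], EarlyStop(patience=2)))  # [0, 1, 2, 3]
```

Because the exception propagates out of the epoch loop, epochs 4 and 5 never run, yet the code after the `except` still does — exactly the "interrupt but clean up" behavior described above.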
###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") 
elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack # export class Learner(): def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, cb_funcs=None, metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True, moms=(0.95,0.85,0.95)): store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dbunch.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dbunch, 'path', Path('.')) self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs)) self.add_cbs(cbs) self.model.to(self.dbunch.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): 
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dbunch.train_dl; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dbunch.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dbunch.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, **kwargs): cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: stack.enter_context(mgr) self(_before_epoch) 
self._do_epoch_validate(ds_idx, dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None): dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dbunch, 'n_inp', -1) full_dec = self.dbunch.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs): if dl is None: dl = self.dbunch.dls[ds_idx] b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dbunch.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dbunch.device if self.opt is None: self.create_opt() distrib_barrier() file = 
join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`.
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dataset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on the `ds_idx`-th dataset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`.

`metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).
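For instance, a custom `splitter` for a hypothetical body/head model could return two parameter groups, so that discriminative learning rates or `freeze` apply per group (`my_splitter` and the model below are illustrative, not library code):

```python
import torch.nn as nn

# Hypothetical two-stage model: a "body" followed by a linear "head"
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

def my_splitter(m):
    # Two parameter groups: everything but the last layer, then the head
    return [list(m[:-1].parameters()), list(m[-1].parameters())]

groups = my_splitter(model)
print(len(groups))                        # 2
print(sum(p.numel() for p in groups[1]))  # 9 (8 weights + 1 bias in the head)
```

With this splitter, `freeze` would leave only the second group (plus batchnorm, when `train_bn=True`) trainable.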
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cb_funcs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cb_funcs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not 
self.training and not self.model.training learn = synth_learner(cb_funcs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True) learn.fit(1) learn.dbunch.device #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
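The training-mode path can be mirrored in plain PyTorch. This sketch is not the fastai implementation — it just performs the same five steps, with comments marking where `one_batch` would fire its events:

```python
import torch

torch.manual_seed(0)  # deterministic init for the illustration
model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_func = torch.nn.MSELoss()

xb = torch.tensor([[1.0], [2.0]])
yb = torch.tensor([[2.0], [4.0]])

pred = model(xb)                     # 'after_pred' would fire here
loss = loss_func(pred, yb)           # 'after_loss'
loss.backward()                      # 'after_backward'
opt.step()                           # 'after_step'
opt.zero_grad()                      # gradients zeroed before 'after_batch'

new_loss = loss_func(model(xb), yb)  # one step should have reduced the loss
print(new_loss.item() < loss.item())
```

In validation mode the sequence stops after the loss computation: no `backward`, `step`, or `zero_grad`.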
###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dbunch.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() 
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dbunch.train_dl learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dbunch.valid_dl learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved. 
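Under the hood, accepting a buffer and remapping devices comes down to `torch.save`/`torch.load` with `map_location` — a rough sketch in plain PyTorch (not the fastai implementation, which also handles the optimizer state):

```python
import io
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# `file` can be an in-memory buffer instead of a path
buf = io.BytesIO()
torch.save({'model': model.state_dict()}, buf)
buf.seek(0)

# `map_location` remaps every tensor in the checkpoint onto the target
# device at load time (e.g. a GPU checkpoint loaded on a CPU-only machine)
state = torch.load(buf, map_location=torch.device('cpu'))
model.load_state_dict(state['model'])
```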
###Code learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified
by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, 
exception=CancelBatchException, train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not 
in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] 
+ (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
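The smoothing in `AvgSmoothLoss` is a standard debiased exponential moving average. In isolation, the same arithmetic looks like this (`smooth_losses` is an illustrative helper, not part of the library):

```python
def smooth_losses(losses, beta=0.98):
    "Debiased exponentially weighted average, as computed by `AvgSmoothLoss`."
    val, out = 0.0, []
    for i, loss in enumerate(losses, start=1):
        val = beta * val + (1 - beta) * loss  # same as torch.lerp(loss, val, beta)
        out.append(val / (1 - beta**i))       # bias correction for early iterations
    return out

smoothed = smooth_losses([5.0, 4.0, 3.0, 2.0])
```

Thanks to the bias correction, the first smoothed value equals the first raw loss exactly; without it, early values would be pulled toward the zero initialization.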
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,4.121094226837158,3.5807554721832275,3.5807554721832275,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dbunch.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dbunch.train_dl) test_eq(res[0], res[1]) x,y = 
learn.dbunch.train_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dbunch.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
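The dispatch idea can be illustrated with a small sketch (`pick_activation` is a hypothetical helper, not fastai's actual logic, which also consults a custom loss function's own `activation` method):

```python
import torch
import torch.nn as nn

def pick_activation(loss_func):
    "Return the activation matching a loss that expects raw logits."
    if isinstance(loss_func, nn.CrossEntropyLoss):
        return lambda x: torch.softmax(x, dim=-1)  # logits -> class probabilities
    if isinstance(loss_func, nn.BCEWithLogitsLoss):
        return torch.sigmoid                       # logits -> probability in [0,1]
    return lambda x: x                             # no activation by default

logits = torch.tensor([[2.0, 0.0, -1.0]])
probs = pick_activation(nn.CrossEntropyLoss())(logits)
```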
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dbunch.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds works with a ds not evenly divisible by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dbunch.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dbunch.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model =
_TupleModel(learn.model) learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dbunch = DataBunch(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(tensor([2.])), [full_dec, dec, out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze
the entire model") #hide class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,18.543609619140625,15.161527633666992,'00:00'] (#4) [0,14.167892456054688,11.612319946289062,'00:00'] (#4) [0,10.885078430175781,8.89686393737793,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dbunch self.dbunch = self.dbunch.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dbunch = old_dbunch ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dbunch.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch # aug_preds.append(self.get_preds(dl=dl)[0][None]) aug_preds.append(self.get_preds(ds_idx)[0][None]) aug_preds = torch.cat(aug_preds).mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ 
###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.learner.ipynb. Converted 43_tabular.model.ipynb. Converted 45_collab.ipynb. 
Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. Converted migrating.ipynb. ###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataBunch(train_dl, valid_dl, device=device) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', False))) if self.run and _run: getattr(self, event_name, noop)() @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a 
bit below and consists of a minimal set of instructions: looping through the data we:

- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients

Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:

- `begin_fit`: called before doing anything, ideal for initial setup.
- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `begin_train`: called at the beginning of the training part of an epoch.
- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes into the model (changing the input with techniques like mixup, for instance).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training, for instance).
- `after_backward`: called after the backward pass, but before the update of the parameters.
It can be used to do any change to the gradients before said update (gradient clipping, for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up.

###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that this only works for reading the value of the attribute; if you want to change it, you have to manually access it with `self.learn.bla`.
In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output _____no_output_____ ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model on the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dbunch.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization.
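To make the bookkeeping in `TrainEvalCallback` concrete, here is a plain-Python trace of its two counters (illustrative only, with a hypothetical 3 epochs of 5 batches — not fastai code): `pct_train` is reset to `epoch/n_epoch` at `begin_train` and grows by `1/(n_iter*n_epoch)` at every `after_batch`, so it ends training at 1.0, while `train_iter` counts every training batch seen.

```python
# Plain-Python sketch of TrainEvalCallback's counters (hypothetical sizes).
n_epoch, n_iter = 3, 5          # 3 epochs of 5 training batches each
pct_train, train_iter = 0.0, 0

for epoch in range(n_epoch):
    pct_train = epoch / n_epoch              # what begin_train does
    for _ in range(n_iter):                  # what after_batch does
        pct_train += 1.0 / (n_iter * n_epoch)
        train_iter += 1

print(train_iter)               # 15: one increment per training batch
print(round(pct_train, 6))      # 1.0: training is 100% complete
```

Schedulers such as one-cycle rely on exactly this `pct_train` value to know how far along training is.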
###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if 
self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads, for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop.

This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelFitException="Interrupts training and go to `after_fit`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelBatchException="Skip the rest of this batch and go to `after_batch`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions has occurred and add code that executes right after with the following events:

- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`

###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
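The mechanics of the batch-level case can be sketched with a stand-in exception and a miniature loop (illustrative only — `one_batch` and `CancelBatchException` below are simplified stand-ins, not the real fastai classes): cancelling skips the rest of that batch, triggers the cancel event, still runs `after_batch` thanks to the `finally` clause, and the loop then moves on to the next batch.

```python
# Stand-in sketch of how the batch loop catches a cancel exception.
class CancelBatchException(Exception): pass   # stand-in for the real class

events = []

def one_batch(i):
    try:
        events.append(f'begin_batch {i}')
        if i == 1:                            # pretend a callback cancels batch 1
            raise CancelBatchException()
        events.append(f'after_step {i}')      # only reached if not cancelled
    except CancelBatchException:
        events.append(f'after_cancel_batch {i}')
    finally:
        events.append(f'after_batch {i}')     # always runs, cancelled or not

for i in range(3):
    one_batch(i)
print(events)
```

Batch 1 never reaches `after_step`, yet its `after_batch` still fires and batch 2 trains normally.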
###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") 
elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack # export class Learner(): def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, cb_funcs=None, metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True, moms=(0.95,0.85,0.95)): store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dbunch.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dbunch, 'path', Path('.')) self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs)) self.add_cbs(cbs) self.model.to(self.dbunch.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): 
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dbunch.train_dl; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dbunch.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dbunch.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, **kwargs): cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: stack.enter_context(mgr) self(_before_epoch) 
self._do_epoch_validate(ds_idx, dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None): dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dbunch, 'n_inp', -1) full_dec = self.dbunch.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs): if dl is None: dl = self.dbunch.dls[ds_idx] b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dbunch.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dbunch.device if self.opt is None: self.create_opt() distrib_barrier() file = 
join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`.
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.

`metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).
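The `splitter` contract — take the model, return a list of parameter groups — can be sketched without torch (a stand-in model with hypothetical parameter names, purely illustrative; the real default `trainable_params` returns a single group of all trainable parameters):

```python
# Hypothetical stand-in model: parameters represented by their names only.
class TinyModel:
    def __init__(self):
        self.body_params = ['body.w', 'body.b']   # e.g. a pretrained backbone
        self.head_params = ['head.w', 'head.b']   # e.g. a freshly added head

def two_group_splitter(model):
    "Return two parameter groups, so the backbone and head can get different hyper-parameters."
    return [model.body_params, model.head_params]

groups = two_group_splitter(TinyModel())
print(len(groups))     # 2 parameter groups
print(groups[1])       # the head group: ['head.w', 'head.b']
```

Splitting into groups like this is what makes discriminative learning rates and `freeze`/`unfreeze` possible: each group can be given its own learning rate or frozen independently.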
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cb_funcs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cb_funcs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not 
self.training and not self.model.training learn = synth_learner(cb_funcs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True) learn.fit(1) learn.dbunch.device #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
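The five steps of a training batch can be traced by hand on the same toy regression used throughout this notebook (`y = a*x + b` with an MSE loss). This is a plain-Python sketch with hand-derived gradients — the same `grad_a = 2*x*(pred-y)` and `grad_b = 2*(pred-y)` formulas checked in `TestOneBatch` below — not a substitute for the real method:

```python
# One manual training step on the toy model y = a*x + b with MSE loss.
a, b, lr = 0.0, 0.0, 0.1
xs, ys = [1.0, 2.0], [5.0, 7.0]       # targets generated by the "true" y = 2x + 3
n = len(xs)

preds = [a * x + b for x in xs]                                     # 1. predictions
loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / n             # 2. loss
grad_a = sum(2 * x * (p - y) for x, p, y in zip(xs, preds, ys)) / n # 3. gradients
grad_b = sum(2 * (p - y) for p, y in zip(preds, ys)) / n
a, b = a - lr * grad_a, b - lr * grad_b                             # 4. optimizer step
grad_a = grad_b = 0.0                                               # 5. zero the grads

new_loss = sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / n
assert new_loss < loss               # one step already reduced the loss
```

Everything `one_batch` adds on top of this is callback dispatch: each numbered step is bracketed by the corresponding `after_pred`/`after_loss`/`after_backward`/`after_step` events.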
###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dbunch.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() 
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dbunch.train_dl learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dbunch.valid_dl learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved. 
###Code learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified
by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, 
exception=CancelBatchException, train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not 
in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] 
+ (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
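As a plain-Python illustration (not the fastai classes defined below), precision is one such metric: true and false positives must be pooled across batches, because averaging per-batch precisions gives a different, wrong answer:

```python
class Precision:
    "Stateful metric: per-batch precisions cannot simply be averaged"
    def reset(self): self.tp, self.fp = 0, 0
    def accumulate(self, preds, targs):
        # Pool raw counts across batches instead of per-batch ratios
        for p, t in zip(preds, targs):
            if p == 1:
                if t == 1: self.tp += 1
                else:      self.fp += 1
    @property
    def value(self):
        return self.tp / (self.tp + self.fp) if self.tp + self.fp else None

m = Precision(); m.reset()
m.accumulate([1, 1, 0], [1, 0, 0])  # batch 1: tp=1, fp=1 -> precision 0.5
m.accumulate([1, 0],    [1, 1])     # batch 2: tp=1, fp=0 -> precision 1.0
assert m.value == 2/3               # pooled result, not (0.5 + 1.0)/2
```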
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
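The debiased exponential moving average that `AvgSmoothLoss` maintains can be reproduced in a few lines of plain Python (a sketch of the same arithmetic, not the fastai class itself):

```python
class SmoothAverage:
    "Exponentially weighted moving average with bias correction"
    def __init__(self, beta=0.98): self.beta, self.count, self.val = beta, 0, 0.0
    def add(self, x):
        self.count += 1
        # Same update as torch.lerp(x, val, beta): beta*val + (1-beta)*x
        self.val = self.beta * self.val + (1 - self.beta) * x
    @property
    def value(self):
        # Divide by (1 - beta**count) to correct the zero-initialization bias
        return self.val / (1 - self.beta ** self.count)

s = SmoothAverage(beta=0.9)
for loss in [4.0, 4.0, 4.0]: s.add(loss)
assert abs(s.value - 4.0) < 1e-6  # a constant series debiases to itself
```

Without the `1 - beta**count` correction, the early values would be dragged toward the zero starting state, which is exactly what the `value` property of `AvgSmoothLoss` compensates for.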
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,26.797739028930664,28.453933715820312,28.45393466949463,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dbunch.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dbunch.train_dl) test_eq(res[0], res[1]) x,y = 
learn.dbunch.train_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dbunch.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
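That dispatch from loss function to activation can be sketched as a simple lookup — purely illustrative, since in fastai the loss classes themselves expose the activation; the `pick_activation` helper below is hypothetical:

```python
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x): return 1 / (1 + math.exp(-x))

def pick_activation(loss_name, act=None):
    "Hypothetical dispatch: pick the activation matching the training loss"
    if act is not None: return act  # an explicit `act` always wins
    defaults = {"cross_entropy":   softmax,
                "bce_with_logits": lambda xs: [sigmoid(x) for x in xs]}
    return defaults.get(loss_name, lambda xs: xs)  # identity if unknown

probs = pick_activation("cross_entropy")([1.0, 1.0])
assert abs(sum(probs) - 1.0) < 1e-9 and abs(probs[0] - 0.5) < 1e-9
```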
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dbunch.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds work with ds not evenly dividble by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dbunch.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dbunch.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = 
_TupleModel(learn.model) learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dbunch = DataBunch(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(tensor([2.])), [full_dec, dec, out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze
the entire model") #hide class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,18.03009033203125,11.90150260925293,'00:00'] (#4) [0,14.469986915588379,9.526132583618164,'00:00'] (#4) [0,11.554342269897461,7.626104354858398,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dbunch self.dbunch = self.dbunch.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dbunch = old_dbunch #export def load_learner(fname, cpu=True): "Load a `Learner` object in `fname`, optionally putting it on the `cpu`" res = torch.load(fname, map_location='cpu' if cpu else None) if hasattr(res, 'to_fp32'): res = res.to_fp32() if cpu: res.dbunch.cpu() return res ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dbunch.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch # aug_preds.append(self.get_preds(dl=dl)[0][None]) aug_preds.append(self.get_preds(ds_idx)[0][None]) 
aug_preds = torch.cat(aug_preds).mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. 
Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.learner.ipynb. Converted 43_tabular.model.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. ###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataLoaders(train_dl, valid_dl, device=device) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', False))) if self.run and _run: getattr(self, event_name, noop)() 
@property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters.
It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that this only works for reading the value of the attribute; if you want to change it, you have to access it explicitly with `self.learn.bla`.
In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output _____no_output_____ ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model on the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dls.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization.
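The dispatch mechanism behind all of this boils down to a `getattr` lookup with a no-op default: calling a callback with an event name runs the matching method if it exists, and silently does nothing otherwise. A minimal, fastai-independent sketch of that idea (class names here are illustrative, not part of the library):

```python
# Minimal sketch of fastai-style event dispatch: calling a callback with an
# event name runs the matching method if defined, and no-ops otherwise.
def noop(): pass

class MiniCallback:
    def __call__(self, event_name):
        getattr(self, event_name, noop)()

class CounterCallback(MiniCallback):
    def __init__(self): self.batches = 0
    def after_batch(self): self.batches += 1

cb = CounterCallback()
for e in ['begin_fit', 'begin_batch', 'after_batch', 'after_batch', 'after_fit']:
    cb(e)              # only 'after_batch' is defined; the rest fall through
print(cb.batches)      # 2
```

This is why a callback that only cares about one event can define just that one method and ignore everything else.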
###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if 
self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow Sometimes we want to skip some of the steps of the training loop: in gradient accumulation, for instance, we don't always want to do the step/zeroing of the grads. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop.This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelBatchException="Skip the rest of this batch and go to `after_batch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelFitException="Interrupts training and go to `after_fit`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`- 
`after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*. 
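To make this exception-driven control flow concrete, here is a hedged, fastai-independent sketch of how a batch loop catches one of these exceptions, fires the matching `after_cancel_*` event, and still runs the clean-up event in a `finally` block (the helper names below are illustrative only):

```python
# Illustrative sketch of exception-based control flow in a batch loop.
class CancelBatchException(Exception): pass

def run_batches(batches, skip_if):
    """Process batches; raising CancelBatchException skips the rest of
    that batch but still reaches the 'after_batch' stage, as in Learner."""
    log = []
    for b in batches:
        try:
            log.append(('begin_batch', b))
            if skip_if(b): raise CancelBatchException()
            log.append(('after_step', b))
        except CancelBatchException:
            log.append(('after_cancel_batch', b))
        finally:
            log.append(('after_batch', b))   # always runs, even when cancelled
    return log

log = run_batches([0, 1, 2], skip_if=lambda b: b == 1)
# batch 1 skips 'after_step' but still gets 'after_cancel_batch' and 'after_batch'
```

The other cancel exceptions work the same way, just at the train/valid/epoch/fit level instead of the batch level.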
###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") 
elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack # export class Learner(): def __init__(self, dls, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, cb_funcs=None, metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True, moms=(0.95,0.85,0.95)): store_attr(self, "dls,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dls.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dls, 'path', Path('.')) self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs)) self.add_cbs(cbs) self.model.to(self.dls.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.) 
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dls, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): 
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dls.train; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, **kwargs): cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: stack.enter_context(mgr) self(_before_epoch) 
self._do_epoch_validate(ds_idx, dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None): dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dls, 'n_inp', -1) full_dec = self.dls.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs): if dl is None: dl = self.dls[ds_idx] b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dls.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dls.device if self.opt is None: self.create_opt() distrib_barrier() file = join_path_file(file, 
self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dls` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`. 
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (under the snake-cased version of its class name). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`.`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below). 
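Since each `Callback` becomes an attribute of the `Learner`, the class name is converted to snake case with the 'Callback' suffix dropped (e.g. `TrainEvalCallback` → `train_eval`). A rough approximation of that conversion, independent of fastai's actual `class2attr` helper, might look like this:

```python
import re

def callback_attr_name(cls):
    # Strip a trailing 'Callback' and convert CamelCase to snake_case,
    # approximating fastai's class2attr behaviour (illustrative only).
    name = re.sub(r'Callback$', '', cls.__name__)
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

class TrainEvalCallback: pass
class ComplicatedNameCallback: pass

print(callback_attr_name(TrainEvalCallback))        # train_eval
print(callback_attr_name(ComplicatedNameCallback))  # complicated_name
```

These names match the `test_eq(ComplicatedNameCallback().name, 'complicated_name')` check earlier in this notebook.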
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cb_funcs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cb_funcs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not 
self.training and not self.model.training learn = synth_learner(cb_funcs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True) learn.fit(1) learn.dls.device #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. 
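The sequence of events fired by `one_batch` can be sketched as a simple function of the mode; this pure-Python sketch mirrors the control flow described above (note that the early `return` for a batch with no target still reaches `after_batch`, since that event sits in a `finally` block):

```python
def one_batch_events(training=True, has_target=True):
    """Return the event sequence Learner.one_batch runs through,
    mirroring the control flow of the method above (illustrative)."""
    events = ['begin_batch', 'after_pred']
    if not has_target:                        # len(yb) == 0: early return,
        return events + ['after_batch']       # but the finally block still fires
    events.append('after_loss')
    if training:                              # backward/step/zero_grad only in training
        events += ['after_backward', 'after_step']
    return events + ['after_batch']

print(one_batch_events(training=True))
# ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
print(one_batch_events(training=False))
# ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
```

These two sequences are exactly the `batch_events` and `batchv_events` lists used by the tests later in this notebook.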
###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dls.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() batch_events = 
['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dls.train learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dls.valid learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved. 
###Code learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataLoaders`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by 
callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, 
exception=CancelBatchException, train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not 
in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] 
+ (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
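For instance, a precision-style metric has to carry running true/false-positive counts across batches. The following is a standalone sketch against this interface, not part of the library tests: it mirrors the `reset`/`accumulate`/`value` protocol, assumes binary predictions given as raw scores with 0/1 integer targets, and uses a duck-typed stand-in for `learn` (anything exposing `pred` and `yb`):

```python
class PrecisionSketch:
    """Sketch of a metric that can't be batch-averaged: precision must come
    from accumulated confusion counts, not from averaging per-batch values."""
    def reset(self): self.tp, self.fp = 0, 0
    def accumulate(self, learn):
        # `learn.pred` / `learn.yb` follow the attribute names described above;
        # a score > 0 counts as a positive prediction (an assumption here).
        for p, t in zip(learn.pred, learn.yb[0]):
            if p > 0:
                if t == 1: self.tp += 1
                else:      self.fp += 1
    @property
    def value(self):
        total = self.tp + self.fp
        return self.tp / total if total else None
```

Averaging two per-batch precisions of 0.5 and 1.0 would give 0.75, while accumulating the counts here correctly yields 2/3, which is why the per-batch shortcut is unsound for this kind of metric.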
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item)
    def after_cancel_train(self):    self.cancel_train = True
    def after_cancel_validate(self): self.cancel_valid = True

    def after_epoch(self):
        "Store and log the loss/metric values"
        self.values.append(self.log[1:].copy())
        if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
        self.logger(self.log)
        self.iters.append(self.smooth_loss.count)

    @property
    def _train_mets(self):
        if getattr(self, 'cancel_train', False): return L()
        return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())

    @property
    def _valid_mets(self):
        if getattr(self, 'cancel_valid', False): return L()
        return (L(self.loss) + self.metrics if self.valid_metrics else L())

    def plot_loss(self, skip_start=5, with_valid=True):
        plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train')
        if with_valid:
            idx = (np.array(self.iters)<skip_start).sum()
            plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid')
        plt.legend()

#export
add_docs(Recorder,
         begin_train = "Reset loss and metrics state",
         after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)",
         begin_validate = "Reset loss and metrics state",
         after_validate = "Log loss and metric values on the validation set",
         after_cancel_train = "Ignore training metrics for this epoch",
         after_cancel_validate = "Ignore validation metrics for this epoch",
         plot_loss = "Plot the losses from `skip_start` and onward")

defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
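The debiased exponential average behind `smooth_loss` can be reproduced in isolation. This is a plain-Python sketch of the update rule used by `AvgSmoothLoss` (not library code): each new value is blended in with weight `1-beta`, and the running value is divided by `1 - beta**count` to correct the bias toward the zero initialization:

```python
class SmoothAverage:
    """Standalone sketch of the exponentially weighted, debiased average
    that the recorder applies to the training loss."""
    def __init__(self, beta=0.98):
        self.beta, self.count, self.val = beta, 0, 0.0
    def add(self, x):
        self.count += 1
        # lerp(x, val, beta) = beta*val + (1-beta)*x
        self.val = self.beta * self.val + (1 - self.beta) * x
        return self.val / (1 - self.beta ** self.count)  # bias correction
```

Thanks to the correction term, the very first smoothed value equals the first loss exactly instead of being dragged toward zero.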
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,9.366103172302246,9.3322114944458,9.3322114944458,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dls.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dls.train) test_eq(res[0], res[1]) x,y = learn.dls.train_ds.tensors 
test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dls.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
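As an illustration of that mapping, here is a plain-Python sketch that dispatches on the loss class name and uses hand-rolled stand-ins for the activations. This is illustrative only: the library itself resolves the activation through an `activation` attribute on its loss wrappers (such as `BCEWithLogitsLossFlat`), not by name matching:

```python
import math

def _sigmoid(x): return 1 / (1 + math.exp(-x))

def _softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def pick_activation(loss_func):
    """Sketch of the automatic choice described above: binary cross-entropy
    with logits gets a sigmoid, cross-entropy-style losses get a softmax,
    anything else is passed through unchanged."""
    name = type(loss_func).__name__
    if 'BCEWithLogits' in name: return _sigmoid
    if 'CrossEntropy'  in name: return _softmax
    return lambda x: x  # default: identity
```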
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dls.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds work with ds not evenly dividble by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dls.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dls.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = _TupleModel(learn.model) 
learn.dls = DataLoaders(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,
- the prediction from the model, potentially passed through the activation of the loss function (if it has one)
- the decoded prediction, using the potential `decodes` method from it
- the fully decoded prediction, using the transforms used to build the `Datasets`/`DataLoaders`
###Code
class _FakeLossFunc(Module):
    reduction = 'none'
    def forward(self, x, y): return F.mse_loss(x,y)
    def activation(self, x): return x+1
    def decodes(self, x):    return 2*x

class _Add1(Transform):
    def encodes(self, x): return x+1
    def decodes(self, x): return x-1

learn = synth_learner(n_train=5)
dl = TfmdDL(Datasets(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dls = DataLoaders(dl, dl)
learn.loss_func = _FakeLossFunc()

inp = tensor([2.])
out = learn.model(inp).detach()+1  #applying model + activation
dec = 2*out                        #decodes from loss function
full_dec = dec-1                   #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
    if self.opt is None: self.create_opt()
    self.opt.freeze_to(n)
    self.opt.clear_state()

@patch
def freeze(self:Learner): self.freeze_to(-1)

@patch
def unfreeze(self:Learner): self.freeze_to(0)

add_docs(Learner,
         freeze_to="Freeze parameter groups up to `n`",
         freeze="Freeze up to last parameter group",
         unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,15.58240795135498,11.86860466003418,'00:00'] (#4) [0,12.968544960021973,9.779583930969238,'00:00'] (#4) [0,10.606522560119629,8.063138008117676,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dls self.dls = self.dls.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dls = old_dbunch #export def load_learner(fname, cpu=True): "Load a `Learner` object in `fname`, optionally putting it on the `cpu`" res = torch.load(fname, map_location='cpu' if cpu else None) if hasattr(res, 'to_fp32'): res = res.to_fp32() if cpu: res.dls.cpu() return res ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch # aug_preds.append(self.get_preds(dl=dl)[0][None]) aug_preds.append(self.get_preds(ds_idx)[0][None]) aug_preds = 
torch.cat(aug_preds).mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. 
Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.learner.ipynb. Converted 43_tabular.model.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. ###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataLoaders(train_dl, valid_dl, device=device) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', False))) if self.run and _run: getattr(self, event_name, noop)() if 
event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit def __setattr__(self, name, value): if hasattr(self.learn,name): warn(f"You are setting an attribute ({name}) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.{name}` if you would like to change it in the learner.") super().__setattr__(name, value) @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. 
It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2. 
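The delegation can be mimicked with a plain `__getattr__`. This is a minimal sketch of the behavior `GetAttr` provides, not the actual implementation: attribute reads fall back to the wrapped learner, while plain assignment lands on the callback itself (which is exactly why the warning below exists):

```python
class Delegator:
    """Minimal sketch of attribute delegation: reads fall back to `self.learn`,
    while plain assignment stays on the callback object."""
    def __init__(self, learn):
        self.learn = learn
    def __getattr__(self, name):
        # Only invoked when normal lookup fails, so attributes set on the
        # delegator itself always shadow the learner's.
        return getattr(self.learn, name)
```

After `d.a = 5`, reading `d.a` no longer reaches the learner at all, since the instance attribute now shadows the delegated one.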
It also issues a warning that something is probably wrong: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output /home/lgvaz/anaconda3/envs/dl_fork/lib/python3.7/site-packages/ipykernel_launcher.py:16: UserWarning: You are setting an attribute (a) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.a` if you would like to change it in the learner. app.launch_new_instance() ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model and the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dls.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization. 
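The `pct_train` arithmetic above can be checked standalone: resetting to `epoch/n_epoch` in `begin_train` and adding `1/(n_iter*n_epoch)` per batch ends the run at 1.0. The batch and epoch counts below are made up for illustration:

```python
# Standalone check of TrainEvalCallback's pct_train bookkeeping; the
# n_iter/n_epoch values are illustrative, not taken from the notebook.
n_iter, n_epoch = 5, 3
pct_train = 0.0                                # begin_fit
for epoch in range(n_epoch):
    pct_train = epoch / n_epoch                # begin_train resets the counter
    for _ in range(n_iter):
        pct_train += 1.0 / (n_iter * n_epoch)  # after_batch increments it
```

At the end of the last epoch this lands on `pct_train == 1.0` (up to float rounding), which is why progress can be reported from it.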
###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if 
self.with_loss: res.append(self.losses)
        return res

show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
show_doc(GatherPredsCallback.after_fit)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code
#export
_ex_docs = dict(
    CancelFitException="Interrupts training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")

for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions occurred and add code that executes right after with the following events:

- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
    after_backward after_step after_cancel_batch after_batch after_cancel_train \
    after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
    after_epoch after_cancel_fit after_fit')

mk_class('event', **_events.map_dict(),
         doc="All possible events as attributes to get tab-completion and typo-proofing")

_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch  = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
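A stripped-down illustration of the try/except/finally shape that turns these exceptions into events (the real handling lives in `Learner.one_batch` and `Learner.fit` below; `_CancelBatch` and this toy loop are illustrative stand-ins, not fastai code):

```python
# Toy stand-in showing how a cancel exception maps onto the
# after_cancel_batch / after_batch events; not the real Learner code.
class _CancelBatch(Exception): pass

def one_batch(cancel, events):
    try:
        events.append('begin_batch')
        if cancel: raise _CancelBatch()
        events.append('after_step')          # reached only when not cancelled
    except _CancelBatch:
        events.append('after_cancel_batch')  # the "right after" event
    finally:
        events.append('after_batch')         # always runs, cancelled or not

trace = []
one_batch(cancel=True, events=trace)
```

Because the cancel event is appended in the `except` clause and `after_batch` in the `finally` clause, a cancelled batch still cleans up through `after_batch`, exactly as described above.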
###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") 
elif with_opt: warn("Saved file doesn't contain an optimizer state.")

# export
def _try_concat(o):
    try:    return torch.cat(o)
    except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())

# export
from contextlib import ExitStack

sort_by_run

# export
class Learner():
    def __init__(self, dls, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
                 metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True,
                 moms=(0.95,0.85,0.95)):
        store_attr(self, "dls,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms")
        self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L()
        #TODO: infer loss_func from data
        if loss_func is None:
            loss_func = getattr(dls.train_ds, 'loss_func', None)
            assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
        self.loss_func = loss_func
        self.path = path if path is not None else getattr(dls, 'path', Path('.'))
        self.add_cbs([(cb() if isinstance(cb, type) else cb) for cb in L(defaults.callbacks)+L(cbs)])
        self.model.to(self.dls.device)
        if hasattr(self.model, 'reset'): self.model.reset()
        self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dls, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): self.n_epoch,self.loss 
= n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dls.train; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffled=False, drop_last=False) cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: 
stack.enter_context(mgr) self(_before_epoch) self._do_epoch_validate(dl=dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None, with_input=False): dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) dec = self.dls.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0] i = getattr(self.dls, 'n_inp', -1) dec_inp,dec_targ = map(detuplify, [dec[:i],dec[i:]]) res = dec_targ,dec_preds[0],preds[0] if with_input: res = (dec_inp,) + res return res def show_results(self, ds_idx=1, dl=None, max_n=9, shuffle=True, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffle=shuffle) b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dls.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, 
device=None, strict=True):
        if device is None: device = self.dls.device
        if self.opt is None: self.create_opt()
        distrib_barrier()
        file = join_path_file(file, self.path/self.model_dir, ext='.pth')
        load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
        return self

Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))

#export
add_docs(Learner, "Group together a `model`, some `dls` and a `loss_func` to handle training",
    add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
    add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
    remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
    remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner",
    added_cbs="Context manager that temporarily adds `cbs`",
    ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
    create_opt="Create an optimizer with `lr`",
    one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
    all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
    fit="Fit `self.model` for `n_epoch` using `cbs`. 
Optionally `reset_opt`.",
    validate="Validate on `dl` with potential new `cbs`.",
    get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
    predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
    show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
    show_training_loop="Show each step in the training loop",
    no_logging="Context manager to temporarily remove `logger`",
    no_mbar="Context manager to temporarily prevent the master progress bar from being created",
    loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
    save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
    load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.

`metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).
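To make the `splitter` contract concrete, here is a hedged sketch in which plain lists stand in for torch parameters; the two-group version is a hypothetical example, not fastai code:

```python
# A splitter takes a model and returns a list of parameter groups.
# Plain lists stand in for torch parameters in this sketch.
def one_group_splitter(params):
    "Default-style splitter: a single group holding every parameter"
    return [list(params)]

def body_head_splitter(body, head):
    "Hypothetical discriminative splitter: separate body and head groups"
    return [list(body), list(head)]

groups = body_head_splitter(['w0', 'w1'], ['w_head'])
```

Each group can then receive its own hyper-parameters (for instance a lower learning rate for the body) when the optimizer is created.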
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cbs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cbs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not self.training 
and not self.model.training learn = synth_learner(cbs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cbs=TestTrainEvalCallback, cuda=True) learn.fit(1) #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cbs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training method, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. ###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) 
def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dls.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, 
cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
    learn._do_begin_fit(1)
    learn.epoch,learn.dl = 0,learn.dls.train
    learn('begin_epoch')
    learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)

valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
    learn.dl = learn.dls.valid
    learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2 + ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing ###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. ###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
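The checkpoint layout that `save_model` and `load_model` agree on can be sketched with plain dicts standing in for the real torch state dicts (`pack`/`unpack` are illustrative names, not fastai helpers):

```python
# Sketch of the save_model/load_model checkpoint layout: either a bare
# model state, or {'model': ..., 'opt': ...} when the optimizer is kept.
def pack(model_state, opt_state=None, with_opt=True):
    if opt_state is None: with_opt = False
    return {'model': model_state, 'opt': opt_state} if with_opt else model_state

def unpack(state):
    hasopt = set(state) == {'model', 'opt'}   # same detection load_model uses
    return (state['model'], state['opt']) if hasopt else (state, None)
```

This mirrors why `load_model` can transparently accept checkpoints saved with or without an optimizer: it inspects the keys rather than requiring a flag.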
###Code
learn = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dls.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()

learn1 = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())

learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling ###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:

- `model`: the model used for training/validation
- `data`: the underlying `DataLoaders`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). 
`xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:

- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:

- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
    def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException,
train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_validate'] + 
batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i 
>=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`). 
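The debiased exponential moving average that `AvgSmoothLoss` maintains can be sketched in plain Python (a minimal sketch mirroring the `torch.lerp` update and the bias correction in `value`; the function name is illustrative):

```python
def smooth_losses(losses, beta=0.98):
    # Mirrors AvgSmoothLoss: val = lerp(loss, val, beta), i.e.
    # val = beta*val + (1-beta)*loss, then debiased by 1 - beta**count
    # so early values are not dragged toward the initial 0.
    val, out = 0.0, []
    for count, loss in enumerate(losses, start=1):
        val = beta * val + (1 - beta) * loss
        out.append(val / (1 - beta ** count))
    return out
```

With a constant loss, the debiasing keeps the smoothed value equal to that constant from the very first batch, instead of warming up from 0.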
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,17.91187858581543,19.128314971923828,19.128315925598145,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dls.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dls.train) test_eq(res[0], res[1]) x,y = learn.dls.train_ds.tensors 
test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dls.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
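As an illustration of that auto-selection, here is a plain-Python sketch (the string-based dispatch and the hand-rolled `softmax`/`sigmoid` are assumptions for illustration only; fastai inspects the loss function object itself and applies the torch equivalents):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pick_activation(loss_name):
    # Check BCEWithLogits first: its name also contains "Logits".
    if "BCEWithLogits" in loss_name:
        return sigmoid
    if "CrossEntropy" in loss_name:
        return softmax
    return lambda x: x  # no activation by default
```

So a cross-entropy loss yields a softmax whose outputs sum to 1, while a binary cross-entropy-with-logits loss maps each logit through a sigmoid.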
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'. ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dls.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds works with a ds not evenly divisible by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dls.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dls.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = _TupleModel(learn.model) 
learn.dls = DataLoaders(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `Datasets`/`DataLoaders` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(Datasets(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dls = DataLoaders(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(inp), [full_dec,dec,out]) test_eq(learn.predict(inp, with_input=True), [inp,full_dec,dec,out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last 
parameter group", unfreeze="Unfreeze the entire model") #hide class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cbs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cbs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,7.853846549987793,6.445760726928711,'00:00'] (#4) [0,6.233814239501953,5.162293434143066,'00:00'] (#4) [0,5.032419681549072,4.134268760681152,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dls self.dls = self.dls.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dls = old_dbunch #export def load_learner(fname, cpu=True): "Load a `Learner` object in `fname`, optionally putting it on the `cpu`" res = torch.load(fname, map_location='cpu' if cpu else None) if hasattr(res, 'to_fp32'): res = res.to_fp32() if cpu: res.dls.cpu() return res ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25, use_max=False): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch aug_preds.append(self.get_preds(ds_idx)[0][None]) aug_preds = torch.cat(aug_preds) aug_preds = 
aug_preds.max(0)[0] if use_max else aug_preds.mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) if use_max: return torch.stack([preds, aug_preds], 0).max(0)[0],targs preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. 
Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. ###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) tfms = Cuda() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0) return DataBunch(train_dl, valid_dl) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 
'training', False))) if self.run and _run: getattr(self, event_name, noop)() @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions: looping through the data, we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. 
It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that it only works to get the value of the attribute; if you want to change it, you have to manually access it with `self.learn.bla`. 
In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output _____no_output_____ ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model on the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dbunch.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization. 
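Its counter bookkeeping can be sketched without any model or device handling (a plain-Python sketch; the `MiniTrainEval` name is made up, while the attribute names mirror `TrainEvalCallback`):

```python
class MiniTrainEval:
    # Sketch of TrainEvalCallback's counters only: no .train()/.eval()
    # switching and no device placement.
    def begin_fit(self, n_epoch, n_iter):
        self.n_epoch, self.n_iter = n_epoch, n_iter
        self.train_iter, self.pct_train = 0, 0.0
    def begin_train(self, epoch):
        self.pct_train = epoch / self.n_epoch
        self.training = True
    def after_batch(self):
        # Only runs in training mode (run_valid = False above).
        self.pct_train += 1.0 / (self.n_iter * self.n_epoch)
        self.train_iter += 1
    def begin_validate(self):
        self.training = False
```

After two epochs of five training batches each, `train_iter` ends at 10 and `pct_train` reaches 1.0.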
###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if 
self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelFitException="Interrupts training and go to `after_fit`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelBatchException="Skip the rest of this batch and go to `after_batch`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- 
`after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*. 
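To make the catching behaviour concrete, here is a minimal, self-contained sketch (plain Python, a toy stand-in rather than the real `Learner`) of how the batch loop handles `CancelBatchException`: the rest of the batch is skipped, but the `finally` clause still fires `after_batch`, which is what makes patterns like gradient accumulation possible.

```python
# Hypothetical callback that skips the optimizer step on even-indexed batches
# by raising CancelBatchException, mimicking the real try/except/finally flow.
class CancelBatchException(Exception): pass

class GradAccumSketch:
    def __init__(self): self.steps, self.batches = 0, 0
    def after_loss(self, i):
        if i % 2 == 0: raise CancelBatchException()  # skip step on even iters
    def after_step(self): self.steps += 1
    def after_batch(self): self.batches += 1

def run_batches(cb, n_iter):
    for i in range(n_iter):
        try:
            cb.after_loss(i)          # backward/step would follow this point
            cb.after_step()
        except CancelBatchException:
            pass                      # 'after_cancel_batch' would fire here
        finally:
            cb.after_batch()          # always reached, as in the real loop

cb = GradAccumSketch()
run_batches(cb, 4)
assert (cb.steps, cb.batches) == (2, 4)  # step skipped twice, after_batch always ran
```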
###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") 
elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack # export class Learner(): def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, cb_funcs=None, metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True, moms=(0.95,0.85,0.95)): store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() #TODO: infer loss_func from data if loss_func is None: loss_func = getattr(dbunch.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dbunch, 'path', Path('.')) self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs)) self.add_cbs(cbs) self.model.to(self.dbunch.device) self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.) 
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): 
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dbunch.train_dl; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dbunch.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dbunch.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, **kwargs): cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: stack.enter_context(mgr) self(_before_epoch) 
self._do_epoch_validate(ds_idx, dl) self(_after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None): dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) i = getattr(self.dbunch, 'n_inp', -1) full_dec = self.dbunch.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:] return detuplify(full_dec),dec_preds[0],preds[0] def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs): if dl is None: dl = self.dbunch.dls[ds_idx] b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dbunch.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dbunch.device if self.opt is None: self.create_opt() distrib_barrier() file = 
join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`. 
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (using the snake_case version of its name, as shown by `Callback.name`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`. `metrics` is an optional list of metrics, which can be either functions or `Metric`s (see below). 
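As an illustration of the `splitter` contract: the function only has to accept the model and return a list of parameter groups. The two-group "model" below is a hypothetical stand-in (plain Python, so it runs without torch); with a real `nn.Module` you would slice `model.parameters()` the same way, e.g. to apply discriminative learning rates per group.

```python
# Hypothetical model with a "body" and a "head"; names are placeholders,
# not part of the fastai API.
class ToyModel:
    def __init__(self):
        self.body_params = ['body.w', 'body.b']
        self.head_params = ['head.w', 'head.b']

def two_group_splitter(model):
    "Return parameter groups: one for the body, one for the head."
    return [model.body_params, model.head_params]

groups = two_group_splitter(ToyModel())
assert len(groups) == 2
assert groups[1] == ['head.w', 'head.b']
```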
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cb_funcs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cb_funcs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not 
self.training and not self.model.training learn = synth_learner(cb_funcs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True) learn.fit(1) learn.dbunch.device #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. 
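The sequence `one_batch` runs (predict, loss, backward, step, zero grads) can be checked numerically on the same `y = a*x + b` regression these tests use. This self-contained sketch hand-computes the MSE gradients (the same formulas `TestOneBatch` verifies); the data and learning rate are illustrative:

```python
# One hand-rolled "batch": forward pass, MSE loss, analytic gradients,
# SGD step, then check the loss went down.
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # true relation: y = 2x
a, b, lr = 0.0, 0.0, 0.1

def mse(a, b):
    return sum((a*x + b - y)**2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse(a, b)
# d/da mean((ax+b-y)^2) and d/db mean((ax+b-y)^2):
grad_a = sum(2*x*(a*x + b - y) for x, y in zip(xs, ys)) / len(xs)
grad_b = sum(2*(a*x + b - y) for x, y in zip(xs, ys)) / len(xs)
a, b = a - lr*grad_a, b - lr*grad_b          # optimizer step
assert mse(a, b) < loss_before               # one step reduced the loss
```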
###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dbunch.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() 
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dbunch.train_dl learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dbunch.valid_dl learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on. 
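For a string `file`, the checkpoint path that `save`/`load` build resolves to `self.path/self.model_dir/file` with a `.pth` suffix. `join_path_file` is a fastai helper not shown here; this stdlib-only sketch just mirrors that visible behaviour (the function name is our own):

```python
from pathlib import Path

def sketch_model_path(file, path='.', model_dir='models', ext='.pth'):
    "Mimic how a string file name becomes <path>/<model_dir>/<file>.pth."
    return Path(path)/model_dir/f"{file}{ext}"

p = sketch_model_path('tmp')
assert p.suffix == '.pth'
assert p.parent.name == 'models'
```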
###Code learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dbunch.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified 
by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, 
exception=CancelBatchException, train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not 
in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] 
+ (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `training_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`). 
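As an illustration of the smoothing above, here is a simplified pure-Python sketch of what `AvgSmoothLoss` computes (plain floats stand in for the loss tensors; `smooth_losses` is a hypothetical helper written for this example, not part of the library):

```python
# Simplified sketch of AvgSmoothLoss: an exponentially weighted average of
# the losses, debiased so that early values aren't dragged toward the 0 init.
def smooth_losses(losses, beta=0.98):
    val, out = 0.0, []
    for count, loss in enumerate(losses, start=1):
        val = beta * val + (1 - beta) * loss   # same update as torch.lerp(loss, val, beta)
        out.append(val / (1 - beta**count))    # debiasing term: divide by 1 - beta**count
    return out

# A constant loss stays (numerically) constant after debiasing:
smooth_losses([2.0, 2.0, 2.0])  # each value is approximately 2.0
```

Without the `1 - beta**count` correction, the first smoothed values would be close to zero simply because the running average starts at zero.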
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,10.865579605102539,10.633462905883789,10.633462905883789,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dbunch.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dbunch.train_dl) test_eq(res[0], res[1]) x,y = 
learn.dbunch.train_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dbunch.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
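To make the two cases concrete, here is a pure-Python sketch of the activations involved (`softmax` and `sigmoid` below are hypothetical stand-ins written from scratch for illustration, not the fastai/PyTorch implementations):

```python
import math

# softmax turns logits into a distribution over mutually exclusive classes;
# sigmoid maps each logit independently into (0, 1), as needed for
# binary cross entropy with logits.
def softmax(xs):
    m = max(xs)                                  # shift by the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

logits = [2.0, 0.5, -1.0]
cross_entropy_preds = softmax(logits)                  # sums to 1
bce_with_logits_preds = [sigmoid(x) for x in logits]   # each value independently in (0, 1)
```

Passing `act` explicitly simply replaces whichever of these would otherwise be inferred from the loss function.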
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dbunch.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds work with ds not evenly dividble by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dbunch.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dbunch.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = 
_TupleModel(learn.model) learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dbunch = DataBunch(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(tensor([2.])), [full_dec, dec, out]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze
the entire model") #hide class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,21.60039710998535,23.18270492553711,'00:00'] (#4) [0,17.718852996826172,19.021663665771484,'00:00'] (#4) [0,14.590808868408203,15.608027458190918,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dbunch self.dbunch = self.dbunch.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dbunch = old_dbunch ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dbunch.dls[ds_idx] if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch # aug_preds.append(self.get_preds(dl=dl)[0][None]) aug_preds.append(self.get_preds(ds_idx)[0][None]) aug_preds = torch.cat(aug_preds).mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx) preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ 
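The final combination step of `tta` can be sketched with scalars (`tta_blend` is a hypothetical helper for illustration, not part of the library; note that `torch.lerp(aug_preds, preds, beta)` is equivalent to `(1-beta)*aug_preds + beta*preds`):

```python
# Sketch of the tta combination step: average the n augmented passes,
# then linearly interpolate with the un-augmented predictions.
def tta_blend(aug_preds, plain_pred, beta=0.25):
    avg = sum(aug_preds) / len(aug_preds)
    return (1 - beta) * avg + beta * plain_pred  # torch.lerp(avg, plain_pred, beta)

tta_blend([0.8, 0.6, 0.7, 0.9], 0.5)  # 0.75*0.75 + 0.25*0.5, approximately 0.6875
```

With `beta=0.25` (the default above), the augmented average dominates the final prediction; `beta=None` skips the blend and returns both sets of predictions.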
###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.learner.ipynb. Converted 43_tabular.model.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. 
Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. Converted migrating.ipynb. ###Markdown Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem): ###Code from torch.utils.data import TensorDataset def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False): def get_data(n): x = torch.randn(int(bs*n)) return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n))) train_ds = get_data(n_train) valid_ds = get_data(n_valid) device = default_device() if cuda else None train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0) valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0) return DataLoaders(train_dl, valid_dl, device=device) class RegModel(Module): def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) def forward(self, x): return x*self.a + self.b ###Output _____no_output_____ ###Markdown Callback - ###Code #export _inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split() #export class Callback(GetAttr): "Basic class handling tweaks of the training loop by changing a `Learner` in various events" _default,learn,run,run_train,run_valid = 'learn',None,True,True,True def __repr__(self): return type(self).__name__ def __call__(self, event_name): "Call `self.{event_name}` if it's defined" _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or (self.run_valid and not getattr(self, 'training', False))) if self.run and _run: getattr(self, event_name, noop)() if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit def __setattr__(self, name, value): if hasattr(self.learn,name): warn(f"You are setting an attribute ({name}) that also exists in the learner. 
Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.{name}` if you would like to change it in the learner.") super().__setattr__(name, value) @property def name(self): "Name of the `Callback`, camel-cased and with '*Callback*' removed" return class2attr(self, 'Callback') ###Output _____no_output_____ ###Markdown The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. 
It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up. ###Code show_doc(Callback.__call__) tst_cb = Callback() tst_cb.call_me = lambda: print("maybe") test_stdout(lambda: tst_cb("call_me"), "maybe") show_doc(Callback.__getattr__) ###Output _____no_output_____ ###Markdown This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`. ###Code mk_class('TstLearner', 'a') class TstCallback(Callback): def batch_begin(self): print(self.a) learn,cb = TstLearner(1),TstCallback() cb.learn = learn test_stdout(lambda: cb('batch_begin'), "1") ###Output _____no_output_____ ###Markdown Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2. It also issues a warning that something is probably wrong: ###Code class TstCallback(Callback): def batch_begin(self): self.a += 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.a, 2) test_eq(cb.learn.a, 1) ###Output /home/sgugger/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:16: UserWarning: You are setting an attribute (a) that also exists in the learner. 
Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.a` if you would like to change it in the learner. app.launch_new_instance() ###Markdown A proper version needs to write `self.learn.a = self.a + 1`: ###Code class TstCallback(Callback): def batch_begin(self): self.learn.a = self.a + 1 learn,cb = TstLearner(1),TstCallback() cb.learn = learn cb('batch_begin') test_eq(cb.learn.a, 2) show_doc(Callback.name, name='Callback.name') test_eq(TstCallback().name, 'tst') class ComplicatedNameCallback(Callback): pass test_eq(ComplicatedNameCallback().name, 'complicated_name') ###Output _____no_output_____ ###Markdown TrainEvalCallback - ###Code #export class TrainEvalCallback(Callback): "`Callback` that tracks the number of iterations done and properly sets training/eval mode" run_valid = False def begin_fit(self): "Set the iter and epoch counters to 0, put the model and the right device" self.learn.train_iter,self.learn.pct_train = 0,0. self.model.to(self.dls.device) def after_batch(self): "Update the iter counter (in training mode)" self.learn.pct_train += 1./(self.n_iter*self.n_epoch) self.learn.train_iter += 1 def begin_train(self): "Set the model in training mode" self.learn.pct_train=self.epoch/self.n_epoch self.model.train() self.learn.training=True def begin_validate(self): "Set the model in validation mode" self.model.eval() self.learn.training=False show_doc(TrainEvalCallback, title_level=3) ###Output _____no_output_____ ###Markdown This `Callback` is automatically added in every `Learner` at initialization. ###Code #hide #test of the TrainEvalCallback below in Learner.fit show_doc(TrainEvalCallback.begin_fit) show_doc(TrainEvalCallback.after_batch) show_doc(TrainEvalCallback.begin_train) show_doc(TrainEvalCallback.begin_validate) ###Output _____no_output_____ ###Markdown GatherPredsCallback - ###Code #export #TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors. 
class GatherPredsCallback(Callback): "`Callback` that saves the predictions and targets, optionally `with_loss`" def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0): store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim") def begin_batch(self): if self.with_input: self.inputs.append((to_detach(self.xb))) def begin_validate(self): "Initialize containers" self.preds,self.targets = [],[] if self.with_input: self.inputs = [] if self.with_loss: self.losses = [] def after_batch(self): "Save predictions, targets and potentially losses" preds,targs = to_detach(self.pred),to_detach(self.yb) if self.save_preds is None: self.preds.append(preds) else: (self.save_preds/str(self.iter)).save_array(preds) if self.save_targs is None: self.targets.append(targs) else: (self.save_targs/str(self.iter)).save_array(targs[0]) if self.with_loss: bs = find_bs(self.yb) loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1) self.losses.append(to_detach(loss)) def after_fit(self): "Concatenate all recorded tensors" if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim)) if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim)) if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim)) if self.with_loss: self.losses = to_concat(self.losses) def all_tensors(self): res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets] if self.with_input: res = [self.inputs] + res if self.with_loss: res.append(self.losses) return res show_doc(GatherPredsCallback, title_level=3) show_doc(GatherPredsCallback.begin_validate) show_doc(GatherPredsCallback.after_batch) show_doc(GatherPredsCallback.after_fit) ###Output _____no_output_____ ###Markdown Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want
to do the step/zeroing of the grads. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch). ###Code #export _ex_docs = dict( CancelFitException="Interrupts training and go to `after_fit`", CancelEpochException="Skip the rest of this epoch and go to `after_epoch`", CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`", CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`", CancelBatchException="Skip the rest of this batch and go to `after_batch`") for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d) show_doc(CancelBatchException, title_level=3) show_doc(CancelTrainException, title_level=3) show_doc(CancelValidException, title_level=3) show_doc(CancelEpochException, title_level=3) show_doc(CancelFitException, title_level=3) ###Output _____no_output_____ ###Markdown You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit` ###Code # export _events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \ after_backward after_step after_cancel_batch after_batch after_cancel_train \ after_train begin_validate
after_cancel_validate after_validate after_cancel_epoch \ after_epoch after_cancel_fit after_fit') mk_class('event', **_events.map_dict(), doc="All possible events as attributes to get tab-completion and typo-proofing") _before_epoch = [event.begin_fit, event.begin_epoch] _after_epoch = [event.after_epoch, event.after_fit] # export _all_ = ['event'] show_doc(event, name='event', title_level=3) test_eq(event.after_backward, 'after_backward') ###Output _____no_output_____ ###Markdown Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*. ###Code #export _loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train', 'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train', 'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop', '**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate', 'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit', 'after_cancel_fit', 'after_fit'] #hide #Full test of the control flow below, after the Learner class ###Output _____no_output_____ ###Markdown Learner - ###Code # export defaults.lr = 1e-3 defaults.wd = 1e-2 defaults.callbacks = [TrainEvalCallback] # export def replacing_yield(o, attr, val): "Context manager to temporarily replace an attribute" old = getattr(o,attr) try: yield setattr(o,attr,val) finally: setattr(o,attr,old) #export def mk_metric(m): "Convert `m` to an `AvgMetric`, unless it's already a `Metric`" return m if isinstance(m, Metric) else AvgMetric(m) #export def save_model(file, model, opt, with_opt=True): "Save `model` to `file` along with `opt` (if available, 
and if `with_opt`)" if opt is None: with_opt=False state = get_model(model).state_dict() if with_opt: state = {'model': state, 'opt':opt.state_dict()} torch.save(state, file) # export def load_model(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = state['model'] if hasopt else state get_model(model).load_state_dict(model_state, strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") elif with_opt: warn("Saved file doesn't contain an optimizer state.") # export def _try_concat(o): try: return torch.cat(o) except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L()) # export from contextlib import ExitStack # export class Learner(): def __init__(self, dls, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None, metrics=None, path=None, model_dir='models', wd=defaults.wd, wd_bn_bias=False, train_bn=True, moms=(0.95,0.85,0.95)): store_attr(self, "dls,model,opt_func,lr,splitter,model_dir,wd,wd_bn_bias,train_bn,metrics,moms") self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L() if loss_func is None: loss_func = getattr(dls.train_ds, 'loss_func', None) assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function." self.loss_func = loss_func self.path = path if path is not None else getattr(dls, 'path', Path('.')) self.add_cbs([(cb() if isinstance(cb, type) else cb) for cb in L(defaults.callbacks)+L(cbs)]) self.model.to(self.dls.device) if hasattr(self.model, 'reset'): self.model.reset() self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property def metrics(self): return self._metrics @metrics.setter def metrics(self,v): self._metrics = L(v).map(mk_metric) def add_cbs(self, cbs): L(cbs).map(self.add_cb) def remove_cbs(self, cbs): L(cbs).map(self.remove_cb) def add_cb(self, cb): old = getattr(self, cb.name, None) assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered" cb.learn = self setattr(self, cb.name, cb) self.cbs.append(cb) return self def remove_cb(self, cb): cb.learn = None if hasattr(self, cb.name): delattr(self, cb.name) if cb in self.cbs: self.cbs.remove(cb) @contextmanager def added_cbs(self, cbs): self.add_cbs(cbs) yield self.remove_cbs(cbs) def ordered_cbs(self, cb_func): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)] def __call__(self, event_name): L(event_name).map(self._call_one) def _call_one(self, event_name): assert hasattr(event, event_name) [cb(event_name) for cb in sort_by_run(self.cbs)] def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state) def create_opt(self): self.opt = self.opt_func(self.splitter(self.model), lr=self.lr) if not self.wd_bn_bias: for p in self._bn_bias_state(True ): p['do_wd'] = False if self.train_bn: for p in self._bn_bias_state(False): p['force_train'] = True def _split(self, b): i = getattr(self.dls, 'n_inp', 1 if len(b)==1 else len(b)-1) self.xb,self.yb = b[:i],b[i:] def all_batches(self): self.n_iter = len(self.dl) for o in enumerate(self.dl): self.one_batch(*o) def one_batch(self, i, b): self.iter = i try: self._split(b); self('begin_batch') self.pred = self.model(*self.xb); self('after_pred') if len(self.yb) == 0: return self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') if not self.training: return self.loss.backward(); self('after_backward') self.opt.step(); self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def _do_begin_fit(self, n_epoch): self.n_epoch,self.loss 
= n_epoch,tensor(0.); self('begin_fit') def _do_epoch_train(self): try: self.dl = self.dls.train; self('begin_train') self.all_batches() except CancelTrainException: self('after_cancel_train') finally: self('after_train') def _do_epoch_validate(self, ds_idx=1, dl=None): if dl is None: dl = self.dls[ds_idx] names = ['shuffle', 'drop_last'] try: dl,old,has = change_attrs(dl, names, [False,False]) self.dl = dl; self('begin_validate') with torch.no_grad(): self.all_batches() except CancelValidException: self('after_cancel_validate') finally: dl,*_ = change_attrs(dl, names, old, has); self('after_validate') def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False): with self.added_cbs(cbs): if reset_opt or not self.opt: self.create_opt() self.opt.set_hypers(wd=self.wd if wd is None else wd, lr=self.lr if lr is None else lr) try: self._do_begin_fit(n_epoch) for epoch in range(n_epoch): try: self.epoch=epoch; self('begin_epoch') self._do_epoch_train() self._do_epoch_validate() except CancelEpochException: self('after_cancel_epoch') finally: self('after_epoch') except CancelFitException: self('after_cancel_fit') finally: self('after_fit') def validate(self, ds_idx=1, dl=None, cbs=None): if dl is None: dl = self.dls[ds_idx] with self.added_cbs(cbs), self.no_logging(), self.no_mbar(): self(_before_epoch) self._do_epoch_validate(ds_idx, dl) self(_after_epoch) return self.recorder.values[-1] @delegates(GatherPredsCallback.__init__) def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, act=None, inner=False, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffled=False, drop_last=False) cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs) #with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar(): ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()] if with_loss: ctx_mgrs.append(self.loss_not_reduced()) with ExitStack() as stack: for mgr in ctx_mgrs: 
stack.enter_context(mgr) self(event.begin_epoch if inner else _before_epoch) self._do_epoch_validate(dl=dl) self(event.after_epoch if inner else _after_epoch) if act is None: act = getattr(self.loss_func, 'activation', noop) res = cb.all_tensors() pred_i = 1 if with_input else 0 if res[pred_i] is not None: res[pred_i] = act(res[pred_i]) if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i])) return tuple(res) def predict(self, item, rm_type_tfms=None, with_input=False): dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms) inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True) dec = self.dls.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0] i = getattr(self.dls, 'n_inp', -1) dec_inp,dec_targ = map(detuplify, [dec[:i],dec[i:]]) res = dec_targ,dec_preds[0],preds[0] if with_input: res = (dec_inp,) + res return res def show_results(self, ds_idx=1, dl=None, max_n=9, shuffle=True, **kwargs): if dl is None: dl = self.dls[ds_idx].new(shuffle=shuffle) b = dl.one_batch() _,_,preds = self.get_preds(dl=[b], with_decoded=True) self.dls.show_results(b, preds, max_n=max_n, **kwargs) def show_training_loop(self): indent = 0 for s in _loop: if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2 elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}') else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s)) @contextmanager def no_logging(self): return replacing_yield(self, 'logger', noop) @contextmanager def no_mbar(self): return replacing_yield(self, 'create_mbar', False) @contextmanager def loss_not_reduced(self): if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none') else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none')) def save(self, file, with_opt=True): if rank_distrib(): return # don't save if slave proc file = join_path_file(file, self.path/self.model_dir, ext='.pth') save_model(file, self.model, 
getattr(self,'opt',None), with_opt) def load(self, file, with_opt=None, device=None, strict=True): if device is None: device = self.dls.device if self.opt is None: self.create_opt() distrib_barrier() file = join_path_file(file, self.path/self.model_dir, ext='.pth') load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict) return self Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i])) #export add_docs(Learner, "Group together a `model`, some `dls` and a `loss_func` to handle training", add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner", add_cb="Add `cb` to the list of `Callback` and register `self` as their learner", remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner", remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner", added_cbs="Context manager that temporarily adds `cbs`", ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop", create_opt="Create an optimizer with `lr`", one_batch="Train or evaluate `self.model` on batch `(xb,yb)`", all_batches="Train or evaluate `self.model` on all batches of `self.dl`", fit="Fit `self.model` for `n_epoch` using `cbs`.
Optionally `reset_opt`.", validate="Validate on `dl` with potential new `cbs`.", get_preds="Get the predictions and targets on the `ds_idx`-th dataset or `dl`, optionally `with_input` and `with_loss`", predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities", show_results="Show some predictions on `ds_idx`-th dataset or `dl`", show_training_loop="Show each step in the training loop", no_logging="Context manager to temporarily remove `logger`", no_mbar="Context manager to temporarily prevent the master progress bar from being created", loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.", save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`", load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`" ) ###Output _____no_output_____ ###Markdown `opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as the default learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`. Each `Callback` is registered as an attribute of `Learner` (under the snake-cased version of its name). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`. `metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).
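As an aside, the naming rule mentioned above (callbacks are registered under the snake-cased version of their class name, matching the `TstCallback().name == 'tst'` tests earlier) can be sketched in isolation. This is only an illustration of the rule, not fastai's actual implementation:

```python
import re

# Sketch (not the fastai source) of how a Callback's attribute name is
# derived: drop the "Callback" suffix, then convert CamelCase to snake_case.
def camel2snake(name):
    s = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s).lower()

def callback_attr_name(cls):
    base = cls.__name__.replace('Callback', '') or cls.__name__
    return camel2snake(base)

class TstCallback: pass
class ComplicatedNameCallback: pass

print(callback_attr_name(TstCallback))              # tst
print(callback_attr_name(ComplicatedNameCallback))  # complicated_name
```

This is also why registering two callbacks of the same class raises an assertion error: both would claim the same attribute name on the `Learner`.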
Training loop ###Code #Test init with callbacks def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs): data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda) return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs) tst_learn = synth_learner() test_eq(len(tst_learn.cbs), 1) assert isinstance(tst_learn.cbs[0], TrainEvalCallback) assert hasattr(tst_learn, ('train_eval')) tst_learn = synth_learner(cbs=TstCallback()) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) tst_learn = synth_learner(cbs=TstCallback) test_eq(len(tst_learn.cbs), 2) assert isinstance(tst_learn.cbs[1], TstCallback) assert hasattr(tst_learn, ('tst')) #A name that becomes an existing attribute of the Learner will throw an exception (here add_cb) class AddCbCallback(Callback): pass test_fail(lambda: synth_learner(cbs=AddCbCallback())) show_doc(Learner.fit) #Training a few epochs should make the model better learn = synth_learner(cbs=TstCallback, lr=1e-2) learn.model = learn.model.cpu() xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(6) assert learn.loss < init_loss #hide #Test of TrainEvalCallback class TestTrainEvalCallback(Callback): run_after,run_valid = TrainEvalCallback,False def begin_fit(self): test_eq([self.pct_train,self.train_iter], [0., 0]) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb)) def after_batch(self): assert self.training test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch)) test_eq(self.train_iter, self.old_train_iter+1) self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter def begin_train(self): assert self.training and self.model.training test_eq(self.pct_train, self.epoch/self.n_epoch) self.old_pct_train = self.pct_train def begin_validate(self): assert not self.training 
and not self.model.training learn = synth_learner(cbs=TestTrainEvalCallback) learn.fit(1) #Check order is properly taken into account learn.cbs = L(reversed(learn.cbs)) #hide #cuda #Check model is put on the GPU if needed learn = synth_learner(cbs=TestTrainEvalCallback, cuda=True) learn.fit(1) #hide #Check wd is not applied on bn/bias when option wd_bn_bias=False class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): p.grad = torch.ones_like(p.data) learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cbs=_PutGrad) learn.model = _TstModel() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, lr=1e-2) end = list(learn.model.tst.parameters()) for i in [0]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i])) for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) show_doc(Learner.one_batch) ###Output _____no_output_____ ###Markdown This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training method, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation. ###Code # export class VerboseCallback(Callback): "Callback that prints the name of each event called" def __call__(self, event_name): print(event_name) super().__call__(event_name) #hide class TestOneBatch(VerboseCallback): def __init__(self, xb, yb, i): self.save_xb,self.save_yb,self.i = xb,yb,i self.old_pred,self.old_loss = None,tensor(0.) 
def begin_batch(self): self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_eq(self.iter, self.i) test_eq(self.save_xb, *self.xb) test_eq(self.save_yb, *self.yb) if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred) def after_pred(self): self.old_pred = self.pred test_eq(self.pred, self.model.a.data * self.x + self.model.b.data) test_eq(self.loss, self.old_loss) def after_loss(self): self.old_loss = self.loss test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb)) for p in self.model.parameters(): if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.])) def after_backward(self): self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean() self.grad_b = 2 * (self.pred.data - self.y).mean() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) test_eq(self.model.a.data, self.old_a) test_eq(self.model.b.data, self.old_b) def after_step(self): test_close(self.model.a.data, self.old_a - self.lr * self.grad_a) test_close(self.model.b.data, self.old_b - self.lr * self.grad_b) self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone() test_close(self.model.a.grad.data, self.grad_a) test_close(self.model.b.grad.data, self.grad_b) def after_batch(self): for p in self.model.parameters(): test_eq(p.grad, tensor([0.])) #hide learn = synth_learner() b = learn.dls.one_batch() learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2) #Remove train/eval learn.cbs = learn.cbs[1:] #Setup learn.loss,learn.training = tensor(0.),True learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.model.train() batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch show_doc(Learner.all_batches) #hide learn = synth_learner(n_train=5, 
cbs=VerboseCallback()) learn.opt = SGD(learn.model.parameters(), lr=learn.lr) with redirect_stdout(io.StringIO()): learn._do_begin_fit(1) learn.epoch,learn.dl = 0,learn.dls.train learn('begin_epoch') learn('begin_train') test_stdout(learn.all_batches, '\n'.join(batch_events * 5)) test_eq(learn.train_iter, 5) valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] with redirect_stdout(io.StringIO()): learn.dl = learn.dls.valid learn('begin_validate') test_stdout(learn.all_batches, '\n'.join(valid_events * 2)) test_eq(learn.train_iter, 5) #hide learn = synth_learner(n_train=5, cbs=VerboseCallback()) test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit') test_eq(learn.n_epoch, 42) test_eq(learn.loss, tensor(0.)) #hide learn.opt = SGD(learn.model.parameters(), lr=learn.lr) learn.epoch = 0 test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train'])) #hide test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2 + ['after_validate'])) ###Output _____no_output_____ ###Markdown Serializing ###Code show_doc(Learner.save) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. ###Code show_doc(Learner.load) ###Output _____no_output_____ ###Markdown `file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
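Since `file` may be a buffer, the round trip performed by `save_model`/`load_model` above can be sketched with `pickle` standing in for `torch.save`/`torch.load`. This only illustrates the `{'model', 'opt'}` state layout; it is not the fastai implementation:

```python
import io, pickle

# Mimic save_model/load_model: the optimizer state is bundled with the model
# state in a {'model': ..., 'opt': ...} dict only when it is saved with_opt.
def save_state(file, model_state, opt_state=None, with_opt=True):
    bundle = with_opt and opt_state is not None
    state = {'model': model_state, 'opt': opt_state} if bundle else model_state
    pickle.dump(state, file)

def load_state(file):
    state = pickle.load(file)
    # Same check as load_model's `hasopt`: keys are exactly {'model', 'opt'}
    has_opt = isinstance(state, dict) and set(state) == {'model', 'opt'}
    return (state['model'], state['opt']) if has_opt else (state, None)

buf = io.BytesIO()                      # a buffer works just like a path
save_state(buf, {'a': 1.0}, {'lr': 1e-3})
buf.seek(0)
model_state, opt_state = load_state(buf)
print(model_state, opt_state)           # {'a': 1.0} {'lr': 0.001}
```

The key-set check is what lets `load_model` accept both plain model state dicts and bundled model+optimizer dicts from the same file format.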
###Code learn = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9)) xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit(1) learn.save('tmp') assert (Path.cwd()/'models/tmp.pth').exists() learn1 = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_eq(learn.opt.state_dict(), learn1.opt.state_dict()) learn.save('tmp1', with_opt=False) learn1 = synth_learner(cbs=TstCallback, opt_func=partial(SGD, mom=0.9)) learn1 = learn1.load('tmp1') test_eq(learn.model.a, learn1.model.a) test_eq(learn.model.b, learn1.model.b) test_ne(learn.opt.state_dict(), learn1.opt.state_dict()) shutil.rmtree('models') ###Output _____no_output_____ ###Markdown Callback handling ###Code show_doc(Learner.__call__) show_doc(Learner.add_cb) learn = synth_learner() learn.add_cb(TestTrainEvalCallback()) test_eq(len(learn.cbs), 2) assert isinstance(learn.cbs[1], TestTrainEvalCallback) test_eq(learn.train_eval.learn, learn) show_doc(Learner.add_cbs) learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()]) test_eq(len(learn.cbs), 4) show_doc(Learner.remove_cb) cb = learn.cbs[1] learn.remove_cb(learn.cbs[1]) test_eq(len(learn.cbs), 3) assert cb.learn is None assert not getattr(learn,'test_train_eval',None) show_doc(Learner.remove_cbs) cb = learn.cbs[1] learn.remove_cbs(learn.cbs[1:]) test_eq(len(learn.cbs), 1) ###Output _____no_output_____ ###Markdown When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dls`: the underlying `DataLoaders`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks).
`xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing ###Code #hide batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch'] batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch'] train_events = ['begin_train'] + batch_events + ['after_train'] valid_events = ['begin_validate'] + batchv_events + ['after_validate'] epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch'] cycle_events = ['begin_fit'] + epoch_events + ['after_fit'] #hide learn = synth_learner(n_train=1, n_valid=1) test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events)) #hide class TestCancelCallback(VerboseCallback): def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, 
train=None): def _interrupt(): if train is None or train == self.training: raise exception() setattr(self, cancel_at, _interrupt) #hide #test cancel batch for i,e in enumerate(batch_events[:-1]): be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch'] bev = be if i <3 else batchv_events cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle)) #CancelBatchException not caught if thrown in any other event for e in cycle_events: if e not in batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(cancel_at=e) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else []) be += ['after_cancel_train', 'after_train'] cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle)) #CancelTrainException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_train'] + batch_events[:-1]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelTrainException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate'] cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle)) #CancelValidException not caught if thrown in any other event for e in cycle_events: if e not in ['begin_validate'] + 
batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelValidException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel epoch #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle)) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)), '\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:])) #CancelEpochException not caught if thrown in any other event for e in ['begin_fit', 'after_epoch', 'after_fit']: if e not in ['begin_validate'] + batch_events[:3]: with redirect_stdout(io.StringIO()): cb = TestCancelCallback(e, CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually #hide #test cancel fit #In begin fit test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)), '\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit'])) #In begin epoch test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)), '\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit'])) #In train for i,e in enumerate(['begin_train'] + batch_events): be = batch_events[:i] + (['after_batch'] if i 
>=1 and i<len(batch_events) else []) cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle)) #In valid for i,e in enumerate(['begin_validate'] + batchv_events): bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else []) cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit'] test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle)) #CancelEpochException not caught if thrown in any other event with redirect_stdout(io.StringIO()): cb = TestCancelCallback('after_fit', CancelEpochException) test_fail(lambda: learn.fit(1, cbs=cb)) learn.remove_cb(cb) #Have to remove it manually ###Output _____no_output_____ ###Markdown Metrics - ###Code #export @docs class Metric(): "Blueprint for defining a metric" def reset(self): pass def accumulate(self, learn): pass @property def value(self): raise NotImplementedError @property def name(self): return class2attr(self, 'Metric') _docs = dict( reset="Reset inner state to prepare for new computation", name="Name of the `Metric`, camel-cased and with Metric removed", accumulate="Use `learn` to update the state with new results", value="The value of the metric") show_doc(Metric, title_level=3) ###Output _____no_output_____ ###Markdown Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. 
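To see why batch-averaging fails for such metrics, here is a toy metric following the `reset`/`accumulate`/`value` interface: RMSE needs the running sum of squared errors across batches, not a mean of per-batch RMSEs. Plain floats stand in for tensors, and `accumulate` takes predictions/targets directly rather than a `Learner`, so this is only a sketch of the pattern:

```python
# A sketch of the Metric interface for a statistic that can't be averaged
# over batches. State is reset before each validation pass, accumulated
# batch by batch, and only read out at the end via `value`.
class RMSE:
    def reset(self):
        self.sse, self.count = 0.0, 0

    def accumulate(self, preds, targs):
        self.sse += sum((p - t) ** 2 for p, t in zip(preds, targs))
        self.count += len(targs)

    @property
    def value(self):
        return (self.sse / self.count) ** 0.5

m = RMSE(); m.reset()
m.accumulate([1.0, 2.0], [1.0, 4.0])   # batch 1: squared errors 0 + 4
m.accumulate([3.0], [0.0])             # batch 2: squared error 9
print(round(m.value, 4))               # sqrt(13/3) ≈ 2.0817
```

Averaging the two per-batch RMSEs (sqrt(2) and 3) would give a different, wrong answer, which is exactly why `Metric` exposes accumulation rather than a per-batch value.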
For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your Metric has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks. ###Code show_doc(Metric.reset) show_doc(Metric.accumulate) show_doc(Metric.value, name='Metric.value') show_doc(Metric.name, name='Metric.name') #export def _maybe_reduce(val): if num_distrib()>1: val = val.clone() torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM) val /= num_distrib() return val #export class AvgMetric(Metric): "Average the values of `func` taking into account potential different batch sizes" def __init__(self, func): self.func = func def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(self.func(learn.pred, *learn.yb))*bs self.count += bs @property def value(self): return self.total/self.count if self.count != 0 else None @property def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__ show_doc(AvgMetric, title_level=3) learn = synth_learner() tst = AvgMetric(lambda x,y: (x-y).abs().mean()) t,u = torch.randn(100),torch.randn(100) tst.reset() for i in range(0,100,25): learn.pred,learn.yb = t[i:i+25],(u[i:i+25],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],) tst.accumulate(learn) test_close(tst.value, (t-u).abs().mean()) #export class AvgLoss(Metric): "Average the losses taking into account potential different batch sizes" def reset(self): self.total,self.count = 0.,0 def accumulate(self, learn): bs = find_bs(learn.yb) self.total += to_detach(learn.loss.mean())*bs self.count += bs @property def value(self): return self.total/self.count if 
self.count != 0 else None @property def name(self): return "loss" show_doc(AvgLoss, title_level=3) tst = AvgLoss() t = torch.randn(100) tst.reset() for i in range(0,100,25): learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #hide #With varying batch size tst.reset() splits = [0, 30, 50, 60, 100] for i in range(len(splits )-1): learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean() tst.accumulate(learn) test_close(tst.value, t.mean()) #export class AvgSmoothLoss(Metric): "Smooth average of the losses (exponentially weighted with `beta`)" def __init__(self, beta=0.98): self.beta = beta def reset(self): self.count,self.val = 0,tensor(0.) def accumulate(self, learn): self.count += 1 self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta) @property def value(self): return self.val/(1-self.beta**self.count) show_doc(AvgSmoothLoss, title_level=3) tst = AvgSmoothLoss() t = torch.randn(100) tst.reset() val = tensor(0.) 
for i in range(4): learn.loss = t[i*25:(i+1)*25].mean() tst.accumulate(learn) val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98) test_close(val/(1-0.98**(i+1)), tst.value) ###Output _____no_output_____ ###Markdown Recorder -- ###Code #export from fastprogress.fastprogress import format_time def _maybe_item(t): t = t.value return t.item() if isinstance(t, Tensor) and t.numel()==1 else t #export class Recorder(Callback): "Callback that registers statistics (lr, loss and metrics) during training" run_after = TrainEvalCallback def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98): store_attr(self, 'add_time,train_metrics,valid_metrics') self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta) def begin_fit(self): "Prepare state for training" self.lrs,self.iters,self.losses,self.values = [],[],[],[] names = self.metrics.attrgot('name') if self.train_metrics and self.valid_metrics: names = L('loss') + names names = names.map('train_{}') + names.map('valid_{}') elif self.valid_metrics: names = L('train_loss', 'valid_loss') + names else: names = L('train_loss') + names if self.add_time: names.append('time') self.metric_names = 'epoch'+names self.smooth_loss.reset() def after_batch(self): "Update all metrics and records lr and smooth loss in training" if len(self.yb) == 0: return mets = self._train_mets if self.training else self._valid_mets for met in mets: met.accumulate(self.learn) if not self.training: return self.lrs.append(self.opt.hypers[-1]['lr']) self.losses.append(self.smooth_loss.value) self.learn.smooth_loss = self.smooth_loss.value def begin_epoch(self): "Set timer if `self.add_time=True`" self.cancel_train,self.cancel_valid = False,False if self.add_time: self.start_epoch = time.time() self.log = L(getattr(self, 'epoch', 0)) def begin_train (self): self._train_mets[1:].map(Self.reset()) def begin_validate(self): self._valid_mets.map(Self.reset()) def after_train (self): self.log += self._train_mets.map(_maybe_item) def 
after_validate(self): self.log += self._valid_mets.map(_maybe_item) def after_cancel_train(self): self.cancel_train = True def after_cancel_validate(self): self.cancel_valid = True def after_epoch(self): "Store and log the loss/metric values" self.values.append(self.log[1:].copy()) if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) self.logger(self.log) self.iters.append(self.smooth_loss.count) @property def _train_mets(self): if getattr(self, 'cancel_train', False): return L() return L(self.smooth_loss) + (self.metrics if self.train_metrics else L()) @property def _valid_mets(self): if getattr(self, 'cancel_valid', False): return L() return (L(self.loss) + self.metrics if self.valid_metrics else L()) def plot_loss(self, skip_start=5, with_valid=True): plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train') if with_valid: idx = (np.array(self.iters)<skip_start).sum() plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid') plt.legend() #export add_docs(Recorder, begin_train = "Reset loss and metrics state", after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)", begin_validate = "Reset loss and metrics state", after_validate = "Log loss and metric values on the validation set", after_cancel_train = "Ignore training metrics for this epoch", after_cancel_validate = "Ignore validation metrics for this epoch", plot_loss = "Plot the losses from `skip_start` and onward") defaults.callbacks = [TrainEvalCallback, Recorder] ###Output _____no_output_____ ###Markdown By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`). 
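Concretely, the debiased exponential moving average used by `AvgSmoothLoss` can be reproduced with plain Python floats — a minimal sketch (the `smooth_losses` helper is hypothetical, not fastai code):

```python
def smooth_losses(losses, beta=0.98):
    """Exponentially weighted moving average of losses, debiased as in AvgSmoothLoss."""
    val, out = 0.0, []
    for count, loss in enumerate(losses, start=1):
        # torch.lerp(loss, val, beta) computes loss + beta * (val - loss)
        val = loss + beta * (val - loss)
        # dividing by (1 - beta**count) removes the bias toward the 0.0 start value
        out.append(val / (1 - beta ** count))
    return out

# a constant loss stream smooths to (almost exactly) the same constant
smoothed = smooth_losses([1.0, 1.0, 1.0])
```

Without the debiasing division, the first few smoothed values would be dragged toward zero by the `val = 0.0` initialization.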
###Code #Test printed output def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_train=5, metrics=tst_metric) pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']" test_stdout(lambda: learn.fit(1), pat, regex=True) #hide class TestRecorderCallback(Callback): run_after=Recorder def begin_fit(self): self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time self.beta = self.recorder.smooth_loss.beta for m in self.metrics: assert isinstance(m, Metric) test_eq(self.recorder.smooth_loss.val, 0.) #To test what the recorder logs, we use a custom logger function. self.learn.logger = self.test_log self.old_smooth,self.count = tensor(0.),0 def after_batch(self): if self.training: self.count += 1 test_eq(len(self.recorder.lrs), self.count) test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr']) test_eq(len(self.recorder.losses), self.count) smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta) smooth /= 1 - self.beta**self.count test_close(self.recorder.losses[-1], smooth, eps=1e-4) test_close(self.smooth_loss, smooth, eps=1e-4) self.old_smooth = self.smooth_loss self.bs += find_bs(self.yb) if not self.training: test_eq(self.recorder.loss.count, self.bs) if self.train_metrics or not self.training: for m in self.metrics: test_eq(m.count, self.bs) self.losses.append(self.loss.detach().cpu()) def begin_epoch(self): if self.add_time: self.start_epoch = time.time() self.log = [self.epoch] def begin_train(self): self.bs = 0 self.losses = [] for m in self.recorder._train_mets: test_eq(m.count, self.bs) def after_train(self): mean = tensor(self.losses).mean() self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss] test_eq(self.log, self.recorder.log) self.losses = [] def begin_validate(self): self.bs = 0 self.losses = [] for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs) def test_log(self, log): res = 
tensor(self.losses).mean() self.log += [res, res] if self.add_time: self.log.append(format_time(time.time() - self.start_epoch)) test_eq(log, self.log) #hide learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.recorder.train_metrics=True learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time']) learn = synth_learner(n_train=5, metrics = tst_metric, cbs = TestRecorderCallback) learn.recorder.add_time=False learn.fit(1) test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric']) #hide #Test numpy metric def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy() learn = synth_learner(n_train=5, metrics=tst_metric_np) learn.fit(1) ###Output (#5) [0,26.88559341430664,25.375843048095703,25.375842094421387,'00:00'] ###Markdown Callback internals ###Code show_doc(Recorder.begin_fit) show_doc(Recorder.begin_epoch) show_doc(Recorder.begin_validate) show_doc(Recorder.after_batch) show_doc(Recorder.after_epoch) ###Output _____no_output_____ ###Markdown Plotting tools ###Code show_doc(Recorder.plot_loss) #hide learn.recorder.plot_loss(skip_start=1) ###Output _____no_output_____ ###Markdown Inference functions ###Code show_doc(Learner.no_logging) learn = synth_learner(n_train=5, metrics=tst_metric) with learn.no_logging(): test_stdout(lambda: learn.fit(1), '') test_eq(learn.logger, print) show_doc(Learner.validate) #Test result learn = synth_learner(n_train=5, metrics=tst_metric) res = learn.validate() test_eq(res[0], res[1]) x,y = learn.dls.valid_ds.tensors test_close(res[0], F.mse_loss(learn.model(x), y)) #hide #Test other dl res = learn.validate(dl=learn.dls.train) test_eq(res[0], res[1]) x,y = learn.dls.train_ds.tensors 
test_close(res[0], F.mse_loss(learn.model(x), y)) #Test additional callback is executed. cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:] test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle)) show_doc(Learner.loss_not_reduced) #hide test_eq(learn.loss_func.reduction, 'mean') with learn.loss_not_reduced(): test_eq(learn.loss_func.reduction, 'none') x,y = learn.dls.one_batch() p = learn.model(x) losses = learn.loss_func(p, y) test_eq(losses.shape, y.shape) test_eq(losses, F.mse_loss(p,y, reduction='none')) test_eq(learn.loss_func.reduction, 'mean') show_doc(Learner.get_preds) ###Output _____no_output_____ ###Markdown Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. 
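For instance, with a binary cross-entropy-with-logits loss the raw outputs are logits and the matching activation is the sigmoid. A quick plain-Python illustration (the `sigmoid` helper below is illustrative, not the implementation fastai uses):

```python
import math

def sigmoid(z):
    # logistic function, split into two branches to avoid overflow for large |z|
    return 1.0 / (1.0 + math.exp(-z)) if z >= 0 else math.exp(z) / (1.0 + math.exp(z))

logits = [-2.0, 0.0, 3.0]             # raw model outputs
probs = [sigmoid(z) for z in logits]  # what a sigmoid activation turns them into
assert all(0.0 < p < 1.0 for p in probs)
```

The raw logits can be any real numbers; after the activation every prediction is a valid probability, which is what makes the returned predictions interpretable.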
> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none' ###Code #Test result learn = synth_learner(n_train=5, metrics=tst_metric) preds,targs = learn.get_preds() x,y = learn.dls.valid_ds.tensors test_eq(targs, y) test_close(preds, learn.model(x)) preds,targs = learn.get_preds(act = torch.sigmoid) test_eq(targs, y) test_close(preds, torch.sigmoid(learn.model(x))) #Test get_preds works with ds not evenly divisible by bs learn = synth_learner(n_train=2.5, metrics=tst_metric) preds,targs = learn.get_preds(ds_idx=0) #hide #Test other dataset x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, y) test_close(preds, learn.model(x)) #Test with loss preds,targs,losses = learn.get_preds(dl=dl, with_loss=True) test_eq(targs, y) test_close(preds, learn.model(x)) test_close(losses, F.mse_loss(preds, targs, reduction='none')) #Test with inputs inps,preds,targs = learn.get_preds(dl=dl, with_input=True) test_eq(inps,x) test_eq(targs, y) test_close(preds, learn.model(x)) #hide #Test with no target learn = synth_learner(n_train=5) x = torch.randn(16*5) dl = TfmdDL(TensorDataset(x), bs=16) preds,targs = learn.get_preds(dl=dl) assert targs is None #hide #Test with targets that are tuples def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y) learn = synth_learner(n_train=5) x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.dls.n_inp=1 learn.loss_func = _fake_loss dl = TfmdDL(TensorDataset(x, y, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_eq(targs, [y,y]) #hide #Test with inputs that are tuples class _TupleModel(Module): def __init__(self, model): self.model=model def forward(self, x1, x2): return self.model(x1) learn = synth_learner(n_train=5) #learn.dls.n_inp=2 x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) learn.model = _TupleModel(learn.model) 
learn.dls = DataLoaders(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16)) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) test_eq(inps, [x,x]) #hide #Test auto activation function is picked learn = synth_learner(n_train=5) learn.loss_func = BCEWithLogitsLossFlat() x = torch.randn(16*5) y = 2*x + 3 + 0.1*torch.randn(16*5) dl = TfmdDL(TensorDataset(x, y), bs=16) preds,targs = learn.get_preds(dl=dl) test_close(preds, torch.sigmoid(learn.model(x))) inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True) tst = learn.get_preds(ds_idx=0, with_input=True, with_decoded=True) show_doc(Learner.predict) ###Output _____no_output_____ ###Markdown It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `Datasets`/`DataLoaders` ###Code class _FakeLossFunc(Module): reduction = 'none' def forward(self, x, y): return F.mse_loss(x,y) def activation(self, x): return x+1 def decodes(self, x): return 2*x class _Add1(Transform): def encodes(self, x): return x+1 def decodes(self, x): return x-1 learn = synth_learner(n_train=5) dl = TfmdDL(Datasets(torch.arange(50), tfms = [L(), [_Add1()]])) learn.dls = DataLoaders(dl, dl) learn.loss_func = _FakeLossFunc() inp = tensor([2.]) out = learn.model(inp).detach()+1 #applying model + activation dec = 2*out #decodes from loss function full_dec = dec-1 #decodes from _Add1 test_eq(learn.predict(inp), [full_dec,dec,out]) test_eq(learn.predict(inp, with_input=True), [inp,full_dec,dec,out]) #export class FetchPreds(Callback): "A callback to fetch predictions during the training loop" def __init__(self, ds_idx=1, dl=None, with_input=False, with_decoded=False): store_attr(self, 'ds_idx,dl,with_input,with_decoded') def after_validate(self): learn,rec = 
self.learn,self.learn.recorder learn.remove_cbs([self,rec]) self.preds = learn.get_preds(ds_idx=self.ds_idx, dl=self.dl, with_input=self.with_input, with_decoded=self.with_decoded, inner=True) learn.add_cbs([self, rec]) ###Output _____no_output_____ ###Markdown Transfer learning ###Code #export @patch def freeze_to(self:Learner, n): if self.opt is None: self.create_opt() self.opt.freeze_to(n) self.opt.clear_state() @patch def freeze(self:Learner): self.freeze_to(-1) @patch def unfreeze(self:Learner): self.freeze_to(0) add_docs(Learner, freeze_to="Freeze parameter groups up to `n`", freeze="Freeze up to last parameter group", unfreeze="Unfreeze the entire model") #hide class _TstModel(nn.Module): def __init__(self): super().__init__() self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1)) self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3)) self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3) def forward(self, x): return x * self.a + self.b class _PutGrad(Callback): def after_backward(self): for p in self.learn.model.tst.parameters(): if p.requires_grad: p.grad = torch.ones_like(p.data) def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]] learn = synth_learner(n_train=5, opt_func = partial(SGD), cbs=_PutGrad, splitter=_splitter, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained even frozen since `train_bn=True` by default for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) #hide learn = synth_learner(n_train=5, opt_func = partial(SGD), cbs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2) learn.model = _TstModel() learn.freeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) 
end = list(learn.model.tst.parameters()) #linear and bn were not trained for i in range(4): test_close(end[i],init[i]) learn.freeze_to(-2) init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear was not trained for i in [0,1]: test_close(end[i],init[i]) #bn was trained for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i])) learn.unfreeze() init = [p.clone() for p in learn.model.tst.parameters()] learn.fit(1, wd=0.) end = list(learn.model.tst.parameters()) #linear and bn were trained for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3) ###Output (#4) [0,7.853846549987793,6.445760726928711,'00:00'] (#4) [0,6.233814239501953,5.162293434143066,'00:00'] (#4) [0,5.032419681549072,4.134268760681152,'00:00'] ###Markdown Exporting a `Learner` ###Code #export @patch def export(self:Learner, fname='export.pkl'): "Export the content of `self` without the items and the optimizer state for inference" if rank_distrib(): return # don't export if slave proc old_dbunch = self.dls self.dls = self.dls.new_empty() state = self.opt.state_dict() self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter("ignore") torch.save(self, self.path/fname) self.create_opt() self.opt.load_state_dict(state) self.dls = old_dbunch #export def load_learner(fname, cpu=True): "Load a `Learner` object in `fname`, optionally putting it on the `cpu`" res = torch.load(fname, map_location='cpu' if cpu else None) if hasattr(res, 'to_fp32'): res = res.to_fp32() if cpu: res.dls.cpu() return res ###Output _____no_output_____ ###Markdown TTA ###Code #export @patch def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25, use_max=False): "Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation" if dl is None: dl = self.dls[ds_idx] if item_tfms is not None 
or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms) with dl.dataset.set_split_idx(0), self.no_mbar(): if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n))) aug_preds = [] for i in self.progress.mbar if hasattr(self,'progress') else range(n): self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch aug_preds.append(self.get_preds(ds_idx, inner=True)[0][None]) aug_preds = torch.cat(aug_preds) aug_preds = aug_preds.max(0)[0] if use_max else aug_preds.mean(0) self.epoch = n with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx, inner=True) if use_max: return torch.stack([preds, aug_preds], 0).max(0)[0],targs preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta) return preds,targs ###Output _____no_output_____ ###Markdown In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. 
Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb.
Part 7 - Loading Image Data.ipynb
###Markdown Loading Image Data So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks. We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images: We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). 
I've also split it into a training set and test set. Transforms When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data Loaders With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python # Looping through it, get a batch on each loop for images, labels in dataloader: pass # Get one batch images, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transforms = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data Augmentation A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn. You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). 
So, for validation/test images, you'll typically just resize and crop. >**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this. Training examples: Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny). In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image Data So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks. We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images: We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. Transforms When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. 
You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data Loaders With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python # Looping through it, get a batch on each loop for images, labels in dataloader: pass # Get one batch images, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `../Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = '../Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html).
When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = '../Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
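A detail worth keeping in mind about `Normalize`: it is plain per-channel arithmetic, so displaying a normalized image just means inverting that arithmetic. A minimal sketch with ordinary Python floats (no torch), using the `[0.5, 0.5, 0.5]` means and standard deviations discussed above:

```python
def normalize(pixel, mean=0.5, std=0.5):
    # What transforms.Normalize does to each channel value
    return (pixel - mean) / std

def denormalize(value, mean=0.5, std=0.5):
    # Inverse operation, useful before plotting a normalized image
    return value * std + mean

# ToTensor scales pixels to [0, 1]; Normalize with mean=std=0.5 maps them to [-1, 1]
assert normalize(0.0) == -1.0
assert normalize(1.0) == 1.0
# Inverting recovers the original pixel value for display
assert denormalize(normalize(0.25)) == 0.25
```

Helper functions like `helper.imshow` typically perform exactly this de-normalization step before calling matplotlib, which is why un-normalized plots of normalized tensors look washed out.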
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. Need to download photos from Kaggle for this notebook to run ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name.
So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/).
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python# Looping through it, get a batch on each loop for images, labels in dataloader: pass # Get one batch images, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code # Practice loading in and transforming the data, normally we will be loading train and test and augmenting the training set # Train Data data_dir = '../Cat_Dog_data/train' # Transforms (named `transform` so we don't shadow the torchvision `transforms` module) transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Load and transform images dataset = datasets.ImageFolder(data_dir, transform=transform) # load data into generator dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`.
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code # Data folder where images are stored locally data_dir = 'Cat_Dog_data' # Define transforms for the training data # Augmentations to allow the model to generalize better train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Define transforms for test images # Don't want to do any data augmentation, just resize, crop and normalize test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) ###Output _____no_output_____ ###Markdown What are the sizes of the transformed images?
Do the training images and the testing images need to be the same size? Are they as I have it set up above? ###Code # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far, we have been working with pre-processed datasets, which will not be the case with real data. Instead, you will deal with full-sized images like the ones a smart-phone camera produces. We are going to see how to load such images and use them to train neural networks, working with a [cat and dog dataset](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. In the end, we want to train a neural network able to differentiate between cats and dogs.
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is `datasets.ImageFolder` from `torchvision`. `dataset = datasets.ImageFolder("path_to_folder", transform = transform)` The `path_to_folder` is the path to the folder containing the data and [`transforms`](https://pytorch.org/vision/stable/transforms.html) is a list of preprocessing steps from the [`torchvision`](https://pytorch.org/vision/stable/index.html) module. The `datasets.ImageFolder` function expects the folders to be organized in a specific way: ```root\dog\xxx.png root\dog\xxy.png root\dog\xxz.png root\cat\xxx.png root\cat\xxy.png root\cat\xxz.png```where each class is a directory inside the `root` directory. So, when the dataset is loaded, the images in a given folder are labeled with the name of their parent folder. TransformsWhen you load the data with `datasets.ImageFolder` you can perform some transformations on the loaded dataset. For example, if the images have different dimensions, we need to bring them to a common size. So, you can resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()` or `transforms.RandomResizedCrop()`. We also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically, you compose all these transformations into a pipeline with `transforms.Compose()`, which accepts a list of transforms and applies them in the order given by the list. For example:```transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor()])``` Data LoadersWith the `ImageFolder` loaded, we have to pass it to a `DataLoader`, which takes a dataset with the specific structure we have seen (thus coherent with the `ImageFolder`) and returns batches of images with the respective labels. It is possible to set: 1. the batch size 2.
whether the data is shuffled after each epoch: `datatrain = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)`. The `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). So, to get the data out of it you have to: 1. loop through it 2. convert it to an iterator and call `next()````# we loop through it and get a batch on each iteration (until we have passed over all the data) for images, labels in trainloader: pass # get one batch images, labels = next(iter(trainloader))``` Now, we load the training images, set some transformations on the dataset and, finally, build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform = transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown Data AugmentationA common strategy when training a neural network is to introduce some randomness in the input data, which helps the network avoid overfitting and generalize better. We can randomly: 1. rotate 2. mirror 3. scale 4. crop images during training. ```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```Typically, you also want to normalize the images. You pass: 1. a list of means 2. a list of standard deviations So the input channels are normalized as: `input[channel] = (input[channel]-mean[channel])/std[channel]` Normalizing helps keep the weights near zero, which makes the training phase (backpropagation) more stable. Without normalization, training typically fails. In the testing phase, however, we want to use unaltered images. So, for validation and testing you skip the random augmentations and only resize, crop and normalize in the same way as for training.
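The "generator" behaviour of the `DataLoader` described above can be sketched with plain Python: it hands out one batch per iteration, which is why you either loop over it or call `next(iter(...))` for a single batch. A stdlib-only sketch (hypothetical stand-in data, no torch):

```python
import random

def dataloader(samples, batch_size, shuffle=True):
    # Yield successive batches, reshuffling the indices once per pass (epoch)
    indices = list(range(len(samples)))
    if shuffle:
        random.shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [samples[i] for i in indices[start:start + batch_size]]

samples = list(range(10))  # stand-ins for (image, label) pairs
first_batch = next(iter(dataloader(samples, batch_size=4)))
print(len(first_batch))    # 4
all_batches = list(dataloader(samples, batch_size=4, shuffle=False))
print(all_batches)         # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Note the last batch can be smaller than `batch_size`; the real `DataLoader` behaves the same way unless you pass `drop_last=True`.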
Now, we are going to define the `trainloader` and the `testloader`, but for now without normalizing the training data. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.Resize(50), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(51), transforms.CenterCrop(50), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) print(images[ii].shape) ###Output torch.Size([3, 49, 49]) torch.Size([3, 49, 49]) torch.Size([3, 49, 49]) torch.Size([3, 49, 49]) ###Markdown Now, we have two different folders: 1. a train folder 2. a test folder So, we can use them to classify images of cats and dogs. First, we define our network. ###Code from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(7500,256) self.fc2 = nn.Linear(256,32) self.fc3 = nn.Linear(32,2) def forward(self, x): x = x.view(x.shape[0],-1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.log_softmax(self.fc3(x), dim=1) return x ###Output _____no_output_____ ###Markdown Now we can do the whole training: 1. forward pass 2.
backward pass ###Code model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr = 0.003) epochs = 5 train_losses, test_losses = [], [] for e in range(epochs): current_loss = 0.0 for images, labels in trainloader: # when computing the predictions on a new batch # we have to throw away the gradients previously computed optimizer.zero_grad() # the model flattens the images itself in its forward pass log_ps = model(images) loss = criterion(log_ps, labels) current_loss += loss.item() # backward pass loss.backward() # update the parameters optimizer.step() else: # we are out of an epoch accuracy = 0.0 test_loss = 0.0 with torch.no_grad(): for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.float)) train_losses.append(current_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}".format(e+1, epochs)) print("Train Loss: {:.3f}: ".format(current_loss/len(trainloader))) print("Test Loss: {:.3f}: ".format(test_loss/len(testloader))) print("Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
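The accuracy bookkeeping in the validation loop earlier reduces to comparing each predicted class index (the argmax that `ps.topk(1)` extracts) against the true label and averaging. A plain-Python sketch of the same computation, with hypothetical log-probabilities:

```python
def accuracy(log_probs, labels):
    # Fraction of rows whose argmax matches the label - the same thing
    # ps.topk(1) followed by the equality check and mean computes.
    correct = 0
    for row, label in zip(log_probs, labels):
        predicted = max(range(len(row)), key=row.__getitem__)  # argmax
        correct += predicted == label
    return correct / len(labels)

# Three samples, two classes (cat=0, dog=1)
log_probs = [[-0.1, -2.3], [-1.9, -0.2], [-0.2, -1.5]]
labels = [0, 1, 0]
print(accuracy(log_probs, labels))  # 1.0
```

Taking the argmax of log-probabilities and of probabilities gives the same prediction, since `exp` is monotonic; the torch version exponentiates first only because `topk` is applied to `ps`.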
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence.
It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python# Looping through it, get a batch on each loop for images, labels in dataloader: pass # Get one batch images, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader.
###Code data_dir = 'D:/Datasets/Cat_Dog_data/train' # Named `transform` so we don't shadow the torchvision `transforms` module transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html).
When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code # data_dir = 'D:/Datasets/Cat_Dog_data' data_dir = '/Users/aravindagayan/Documents/Projects/DataSets/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(255), transforms.RandomResizedCrop(224), transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) print(images.size()) print(labels.size()) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output torch.Size([32, 3, 224, 224]) torch.Size([32]) ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. 
Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code data_dir = 'D:/Datasets/Cat_Dog_data/train' # Named `transform` so we don't shadow the torchvision `transforms` module transform = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.Resize(30), transforms.CenterCrop(28), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) # helper.imshow(images[0], normalize=False) helper.imshow(images[0,:]); # data_dir = 'D:/Datasets/Cat_Dog_data' data_dir = '/Users/aravindagayan/Documents/Projects/DataSets/Cat_Dog_data' train_transforms = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.Resize(30), transforms.RandomResizedCrop(28), transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5],[0.5])]) test_transforms = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.Resize(30), transforms.CenterCrop(28), transforms.ToTensor(), transforms.Normalize([0.5],[0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=8) testloader = torch.utils.data.DataLoader(test_data, batch_size=8) # change this to the trainloader or testloader data_iter
= iter(trainloader) images, labels = next(data_iter) print(images.size()) print(labels.size()) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset import matplotlib.pyplot as plt import torch from torch import nn from torch import optim import torch.nn.functional as F import helper import fc_model # Create the network, define the criterion and optimizer model = fc_model.Network(784, 2, [256, 128]) criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) print(model) fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=1) # Test out your network! model.eval() dataiter = iter(testloader) images, labels = next(dataiter) img = images[0] # Convert 2D image to 1D vector img = img.view(1, 784) # helper.imshow(img) helper.imshow(images[0,:]) # Calculate the class probabilities (softmax) for img with torch.no_grad(): output = model(img) ps = torch.exp(output) ps = ps.data.numpy().squeeze() print(ps) # Plot the image and probabilities # helper.view_classify(img.view(1, 28, 28), ps, version='Fashion') ###Output [1.1275088e-37 1.0000000e+00] ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs.
These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`.
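As an aside, the directory-to-label convention `ImageFolder` uses can be mimicked in plain Python. This is a simplified sketch of the idea (class names sorted alphabetically and mapped to integer labels), not torchvision's actual implementation:

```python
import os
import tempfile

def find_classes(root):
    # Collect subdirectory names, sort them, and map each class name to an
    # integer label -- mirroring ImageFolder's alphabetical convention
    classes = sorted(d for d in os.listdir(root)
                     if os.path.isdir(os.path.join(root, d)))
    return classes, {name: idx for idx, name in enumerate(classes)}

# Build a tiny root/cat, root/dog layout to demonstrate
root = tempfile.mkdtemp()
for cls in ('dog', 'cat'):
    os.makedirs(os.path.join(root, cls))

classes, class_to_idx = find_classes(root)
print(classes)        # ['cat', 'dog']
print(class_to_idx)   # {'cat': 0, 'dog': 1}
```

So an image under `root/cat/` would get label 0 and one under `root/dog/` label 1, regardless of the file name.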
Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and whether the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python# Looping through it, get a batch on each loop for images, labels in dataloader: pass # Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader.
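Before tackling the exercise, here is the batch-then-iterate behaviour described above illustrated with plain Python lists. This is a toy stand-in for `DataLoader` (no shuffling, no tensors), just to show the two access patterns:

```python
def batches(data, batch_size):
    # Yield consecutive slices of `data`, the way DataLoader yields batches
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

data = list(range(10))

# Looping through it, get a batch on each loop
for batch in batches(data, 4):
    print(batch)            # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]

# Or convert to an iterator and pull a single batch with next()
first = next(iter(batches(data, 4)))
print(first)                # [0, 1, 2, 3]
```

Note the last batch is smaller when the dataset size isn't a multiple of the batch size, which the real `DataLoader` also does by default.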
###Code data_dir = 'Cat_Dog_data/train' transforms = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way).
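To make the normalization formula concrete, here is the arithmetic done by hand (plain Python, no torch) for the `mean=0.5`, `std=0.5` case used above:

```python
def normalize(value, mean=0.5, std=0.5):
    # input[channel] = (input[channel] - mean[channel]) / std[channel]
    return (value - mean) / std

# ToTensor() scales pixel values to [0, 1]; normalizing with mean 0.5 and
# std 0.5 then maps that range onto [-1, 1]
print(normalize(0.0))   # -1.0  (darkest pixel)
print(normalize(0.5))   # 0.0   (midpoint)
print(normalize(1.0))   # 1.0   (brightest pixel)
```

The same arithmetic is applied independently to each of the three color channels, which is why `Normalize` takes a list of three means and three standard deviations.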
So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras.
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. 
You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'dogs-vs-cats/train' trans = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=trans) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). 
When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'dogs-vs-cats' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset import fc_model from torch import nn, optim model = fc_model.Network(784, 10, [512, 256, 128]) criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2) ###Output Epoch: 1/2.. Training Loss: 0.270.. Test Loss: 60.862.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 67.349.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 67.778.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 67.968.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 68.181.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 68.265.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 68.620.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 70.846.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 4.834.. Test Loss: 1.875.. Test Accuracy: 0.506 Epoch: 1/2.. Training Loss: 0.186.. Test Loss: 61.078.. Test Accuracy: 0.506 Epoch: 1/2.. Training Loss: 0.001.. Test Loss: 71.533.. Test Accuracy: 0.506 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 73.338.. Test Accuracy: 0.506 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 76.361.. Test Accuracy: 0.506 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 77.425.. Test Accuracy: 0.506 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. 
You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transforms = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). 
So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects.
Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. 
For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transforms = transforms.Compose([transforms.Resize(255), # transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transforms) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html).
When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. 
Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. 
For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
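Before the solution below, one detail worth knowing: `ImageFolder` assigns labels by taking the subdirectory names, sorting them alphabetically, and mapping each to an integer index (so with the layout above, `cat` becomes 0 and `dog` becomes 1). Here's a simplified pure-Python sketch of that rule; `build_class_index` is a hypothetical illustration, not torchvision's actual code:

```python
def build_class_index(subdirs):
    # Mimic how ImageFolder labels data: one class per subdirectory,
    # sorted alphabetically, each mapped to an integer index.
    classes = sorted(subdirs)
    return {name: i for i, name in enumerate(classes)}

print(build_class_index(['dog', 'cat']))  # {'cat': 0, 'dog': 1}
```

The real mapping is exposed on the dataset as `dataset.class_to_idx` once you've built it.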
###Code data_dir = '/home/cs/Downloads/Cat_Dog_data/train' # TODO: compose transforms here data_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(root=data_dir, transform=data_transforms) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. 
Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = '/home/cs/Downloads/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available; I'll cover more in a bit, and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels.
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transformation = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transformation) # TODO: use the ImageFolder dataset to create the DataLoader dataloader =torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
###Code data_dir = 'Cat_Dog_data' ## TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
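If you do try the optional cell below, note that convolutional layers scale to images far better than fully-connected ones, because their weights are shared across spatial positions. A hedged sketch of a small convolutional classifier for batches of 224x224 RGB images; this illustrates the shape bookkeeping only, not a tuned or trained solution:

```python
import torch
from torch import nn

# Small conv net: each block halves the spatial size, and the average
# pooling keeps the final Linear tiny regardless of input resolution.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 16 x 112 x 112
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32 x 56 x 56
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 64 x 28 x 28
    nn.AdaptiveAvgPool2d(1),                                      # -> 64 x 1 x 1
    nn.Flatten(),                                                 # -> 64
    nn.Linear(64, 2),
    nn.LogSoftmax(dim=1),                                         # pairs with nn.NLLLoss
)

x = torch.randn(4, 3, 224, 224)  # a fake batch of four images
print(model(x).shape)            # torch.Size([4, 2])
```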
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn from torch import optim import torch.nn.functional as F import helper import fc_model # Define a transform to normalize the data #transform = transforms.Compose([transforms.ToTensor(), # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) image, label = next(iter(trainloader)) helper.imshow(image[1,:]); # Create the network, define the criterion and optimizer # 784 was a leftover from 28x28 MNIST; these RGB images flatten to 3*224*224 # (the train and test transforms must both produce 224x224 for this to work) input_size = 3 * 224 * 224 model = fc_model.Network(input_size, 2, [1024,256,128,64,32], drop_p=0.5) model criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2) # Test out your network! model.eval() dataiter = iter(testloader) images, labels = next(dataiter) img = images[0] # Flatten the 3x224x224 image to a 1D vector img = img.view(1, input_size) # Calculate the class probabilities (softmax) for img with torch.no_grad(): output = model(img) ps = torch.exp(output) # Show the image and print the class probabilities (the Fashion-MNIST # helper.view_classify call doesn't apply to 224x224 cat/dog images) helper.imshow(images[0]) print(ps) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks. We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple of example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' datatransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=datatransforms) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) print(images.size()) helper.imshow(images[3], normalize=False) print(labels[3]) #0 is cat #1 is dog ###Output tensor(0) ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and a list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn. You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html).
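The channel formula above is ordinary arithmetic, so it's easy to check by hand: `ToTensor()` puts pixels in [0, 1], and with a mean and std of 0.5 they land in [-1, 1]. A quick sketch (`normalize_channel` is a throwaway helper, not part of torchvision):

```python
def normalize_channel(value, mean=0.5, std=0.5):
    # input[channel] = (input[channel] - mean[channel]) / std[channel]
    return (value - mean) / std

print(normalize_channel(0.0))  # -1.0  (black)
print(normalize_channel(0.5))  # 0.0   (mid-gray)
print(normalize_channel(1.0))  # 1.0   (white)
```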
When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny). In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset import numpy as np import time from torch import nn from torch import optim import torch.nn.functional as F # TODO: Define your network architecture here input_size = 150528 hidden_sizes = [50176, 1000, 100, 10] # note: nn.Linear(150528, 50176) alone has ~7.6 billion weights, which is # why fully-connected nets like this don't scale to full-size images output_size = 2 # Build a feed-forward network model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]), nn.ReLU(), nn.Linear(hidden_sizes[0], hidden_sizes[1]), nn.ReLU(), nn.Linear(hidden_sizes[1], hidden_sizes[2]), nn.ReLU(), nn.Linear(hidden_sizes[2], hidden_sizes[3]), nn.ReLU(), nn.Linear(hidden_sizes[3], output_size)) #nn.Softmax(dim=1)) print(model) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks. We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple of example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper from torch.utils.data import DataLoader device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. 
Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/Cat_Dog_data/train' data_transform = transforms.Compose([transforms.Resize(244), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=data_transform) dataloader = DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). 
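The (0.5, 0.5, 0.5) means and standard deviations used here are a convenient default; you can also estimate per-channel statistics from your own training set and pass those to `transforms.Normalize`. A hedged sketch of one way to do that; `channel_stats` is a hypothetical helper, demonstrated on random stand-in batches rather than a real DataLoader:

```python
import torch

def channel_stats(batches):
    # Accumulate per-channel mean and std over an iterable of
    # (N, C, H, W) batches (assumes all images are the same size).
    n = 0
    mean = torch.zeros(3)
    sq = torch.zeros(3)
    for images in batches:
        n += images.shape[0]
        mean += images.mean(dim=(2, 3)).sum(dim=0)       # per-image channel means
        sq += (images ** 2).mean(dim=(2, 3)).sum(dim=0)  # per-image mean of squares
    mean /= n
    std = (sq / n - mean ** 2).sqrt()                    # std = sqrt(E[x^2] - E[x]^2)
    return mean, std

fake_batches = [torch.rand(8, 3, 32, 32) for _ in range(4)]  # stand-in for a loader
mean, std = channel_stats(fake_batches)
print(mean.shape, std.shape)  # torch.Size([3]) torch.Size([3])
```

For uniform random pixels the result should come out near mean 0.5 and std 0.29; on a real dataset you'd iterate the training `DataLoader` once instead.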
When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(244), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels.
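Conceptually, the batching a `DataLoader` performs looks like the pure-Python sketch below (the real loader also handles tensor collation, worker processes, and more — this only shows the shuffle-and-chunk idea):

```python
import random

def iterate_batches(dataset, batch_size=32, shuffle=True):
    """Yield the dataset in fixed-size batches, reshuffling each pass."""
    indices = list(range(len(dataset)))
    if shuffle:
        random.shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

# Stand-in dataset of 100 (sample, label) pairs
data = [(i, i % 2) for i in range(100)]
batches = list(iterate_batches(data, batch_size=32, shuffle=False))
print(len(batches))      # 4 batches: 32 + 32 + 32 + 4
print(len(batches[-1]))  # the last batch holds the 4 leftover samples
```

The real `DataLoader` has the same leftover behavior, which you can disable with `drop_last=True`.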
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transforms = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets.
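To put a number on how much more complicated: compare the flattened input sizes, assuming the common 224x224 RGB crop (the exact crop size is a choice, not fixed by the lesson):

```python
# Flattened input size: cat/dog photos vs the MNIST images seen so far
mnist_inputs = 28 * 28            # one grayscale channel -> 784 values
catdog_inputs = 3 * 224 * 224     # three color channels  -> 150528 values

print(catdog_inputs)                  # 150528
print(catdog_inputs // mnist_inputs)  # 192x more inputs per image
```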
To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`.
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels.
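For a sense of scale, each batch the loader returns is one 4-D block of pixel values plus a 1-D block of labels. The shape arithmetic below assumes batch size 32 and 224x224 RGB crops (both are choices, not requirements):

```python
# Shape bookkeeping for one batch of cat/dog images
batch_size, channels, height, width = 32, 3, 224, 224

images_shape = (batch_size, channels, height, width)  # what images.shape reports
labels_shape = (batch_size,)                          # one integer label per image
values_per_batch = batch_size * channels * height * width

print(images_shape)      # (32, 3, 224, 224)
print(values_per_batch)  # 4816896 pixel values in a single batch
```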
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transforms = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets.
To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torch import nn from torch import optim from torchvision import datasets, transforms import helper import fc_model ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`.
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels.
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed 
images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset model = fc_model.Network(3*224**2,2,[25000,12500,6250,3125,1024,256,128,64,32]) model criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) fc_model.train(model,trainloader,testloader,criterion,optimizer,epochs=3,print_every=40,pixels=3*224**2) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence.
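The mechanism behind `Compose` is tiny: it just calls each transform on the output of the previous one. A minimal pure-Python sketch of the idea (not torchvision's actual class):

```python
class MiniCompose:
    """Minimal sketch of transforms.Compose: apply each step in order."""
    def __init__(self, steps):
        self.steps = steps

    def __call__(self, x):
        for step in self.steps:
            x = step(x)
        return x

# Chain two string "transforms" to show the order of application
pipeline = MiniCompose([str.strip, str.upper])
print(pipeline("  cat  "))  # 'CAT'
```

Because each step is just a callable, you can mix in your own functions alongside the built-in transforms.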
It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader.
###Code data_dir = 'assets/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html).
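A quick numeric check of the `(input - mean) / std` formula: with mean and std both 0.5, `ToTensor`'s [0, 1] pixel range maps onto [-1, 1], and multiplying back by `std` and adding `mean` undoes it (which is what display helpers do before showing a normalized image):

```python
def normalize(value, mean=0.5, std=0.5):
    # Per-channel formula used by transforms.Normalize
    return (value - mean) / std

def unnormalize(value, mean=0.5, std=0.5):
    # Inverse mapping, useful when displaying normalized images
    return value * std + mean

print(normalize(0.0), normalize(0.5), normalize(1.0))  # -1.0 0.0 1.0
print(unnormalize(normalize(0.25)))                    # 0.25
```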
When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'assets/Cat_Dog_data' from torchvision import datasets, transforms # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(110), transforms.CenterCrop(100), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep.
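One way to see the problem: count the weights in just the first fully connected layer for flattened 100x100 RGB inputs and a modest 128 hidden units (the sizes the network below happens to use). A convolutional layer shares its weights across the image and avoids this blow-up:

```python
# Weights + biases in the first fully connected layer alone,
# for flattened 100x100 RGB images feeding 128 hidden units
input_size = 3 * 100 * 100   # 30000 values per image
hidden_units = 128
first_layer_params = input_size * hidden_units + hidden_units

print(first_layer_params)  # 3840128 parameters in a single layer
```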
These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code from torch import nn from torch import optim import torch.nn.functional as F # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset class Network(nn.Module): def __init__(self, drop_p=0.5): super().__init__() self.il = nn.Linear(100*100*3, 128) self.hl1 = nn.Linear(128, 64) self.hl2 = nn.Linear(64, 32) self.hl3 = nn.Linear(32, 10) self.dropout = nn.Dropout(p=drop_p) def forward(self, x): x = self.il(x) x = F.relu(x) x = self.dropout(x) x = self.hl1(x) x = F.relu(x) x = self.dropout(x) x = self.hl2(x) x = F.relu(x) x = self.dropout(x) x = self.hl3(x) # Return raw scores: nn.CrossEntropyLoss applies log-softmax internally return x model = Network() # model = model.cuda() model images, labels = next(iter(trainloader)) images.resize_(images.shape[0], 1, 3*100*100) ps = F.softmax(model.forward(images[0]), dim=1) helper.view_classify(images[0].view(3, 100, 100), ps) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr = 0.0001) def validation(model, testloader, criterion): test_loss = 0 accuracy = 0 for images, labels in testloader: images.resize_(images.shape[0], 100*100*3) output = model.forward(images) test_loss += criterion(output, labels).item() equality = (labels.data == output.max(dim=1)[1]) accuracy += equality.type(torch.FloatTensor).mean() return test_loss, accuracy epochs = 5 steps = 0 running_loss = 0 print_every = 40 for e in range(epochs): model.train() for images, labels in trainloader: steps += 1 # Flatten images into a 30000 long vector (3*100*100) images.resize_(images.size()[0], 100*100*3) optimizer.zero_grad() output = model.forward(images) loss = criterion(output, labels) loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: # Make sure network is in eval mode for inference model.eval() # Turn off 
gradients for validation, saves memory and computations with torch.no_grad(): test_loss, accuracy = validation(model, testloader, criterion) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/print_every), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) running_loss = 0 # Make sure training is back on model.train() ###Output Epoch: 1/5.. Training Loss: 2.149.. Test Loss: 2.118.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 1.923.. Test Loss: 2.031.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 1.740.. Test Loss: 1.992.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 1.633.. Test Loss: 1.975.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 1.548.. Test Loss: 1.972.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 1.529.. Test Loss: 1.969.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 1.514.. Test Loss: 1.968.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 1.509.. Test Loss: 1.968.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 1.699.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 2.452.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 2.448.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 2.447.. Test Loss: 1.968.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 2.443.. Test Loss: 1.968.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 2.424.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 2.362.. Test Loss: 1.964.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 2.022.. Test Loss: 1.948.. Test Accuracy: 0.509 Epoch: 1/5.. Training Loss: 1.576.. Test Loss: 1.957.. Test Accuracy: 0.506 Epoch: 2/5.. Training Loss: 1.882.. Test Loss: 1.956.. Test Accuracy: 0.506 Epoch: 2/5.. Training Loss: 2.428.. Test Loss: 1.956.. Test Accuracy: 0.505 Epoch: 2/5.. Training Loss: 2.362.. Test Loss: 1.948.. Test Accuracy: 0.508 Epoch: 2/5.. Training Loss: 2.132.. 
Test Loss: 1.941.. Test Accuracy: 0.521 Epoch: 2/5.. Training Loss: 1.739.. Test Loss: 1.962.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 1.557.. Test Loss: 1.965.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 1.501.. Test Loss: 1.966.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 1.490.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 1.481.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 2.075.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 2.449.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 2.448.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 2.437.. Test Loss: 1.966.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 2.430.. Test Loss: 1.965.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 2.395.. Test Loss: 1.963.. Test Accuracy: 0.491 Epoch: 2/5.. Training Loss: 2.253.. Test Loss: 1.937.. Test Accuracy: 0.520 Epoch: 2/5.. Training Loss: 1.858.. Test Loss: 1.932.. Test Accuracy: 0.525 Epoch: 2/5.. Training Loss: 1.589.. Test Loss: 1.955.. Test Accuracy: 0.506 Epoch: 3/5.. Training Loss: 2.228.. Test Loss: 1.952.. Test Accuracy: 0.508 Epoch: 3/5.. Training Loss: 2.274.. Test Loss: 1.919.. Test Accuracy: 0.541 Epoch: 3/5.. Training Loss: 2.067.. Test Loss: 1.923.. Test Accuracy: 0.537 Epoch: 3/5.. Training Loss: 1.743.. Test Loss: 1.950.. Test Accuracy: 0.510 Epoch: 3/5.. Training Loss: 1.593.. Test Loss: 1.956.. Test Accuracy: 0.503 Epoch: 3/5.. Training Loss: 1.527.. Test Loss: 1.966.. Test Accuracy: 0.492 Epoch: 3/5.. Training Loss: 1.498.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 1.480.. Test Loss: 1.966.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 1.488.. Test Loss: 1.966.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 2.447.. Test Loss: 1.966.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 2.442.. Test Loss: 1.965.. Test Accuracy: 0.495 Epoch: 3/5.. Training Loss: 2.418.. Test Loss: 1.964.. 
Test Accuracy: 0.496 Epoch: 3/5.. Training Loss: 2.366.. Test Loss: 1.952.. Test Accuracy: 0.508 Epoch: 3/5.. Training Loss: 2.267.. Test Loss: 1.949.. Test Accuracy: 0.509 Epoch: 3/5.. Training Loss: 2.019.. Test Loss: 1.923.. Test Accuracy: 0.538 Epoch: 3/5.. Training Loss: 1.726.. Test Loss: 1.926.. Test Accuracy: 0.533 Epoch: 3/5.. Training Loss: 1.578.. Test Loss: 1.953.. Test Accuracy: 0.508 Epoch: 4/5.. Training Loss: 1.693.. Test Loss: 1.956.. Test Accuracy: 0.505 Epoch: 4/5.. Training Loss: 2.376.. Test Loss: 1.937.. Test Accuracy: 0.522 Epoch: 4/5.. Training Loss: 2.259.. Test Loss: 1.913.. Test Accuracy: 0.546 Epoch: 4/5.. Training Loss: 1.989.. Test Loss: 1.926.. Test Accuracy: 0.534 Epoch: 4/5.. Training Loss: 1.752.. Test Loss: 1.952.. Test Accuracy: 0.507 Epoch: 4/5.. Training Loss: 1.631.. Test Loss: 1.953.. Test Accuracy: 0.507 Epoch: 4/5.. Training Loss: 1.548.. Test Loss: 1.960.. Test Accuracy: 0.500 Epoch: 4/5.. Training Loss: 1.520.. Test Loss: 1.964.. Test Accuracy: 0.495 Epoch: 4/5.. Training Loss: 1.516.. Test Loss: 1.966.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 1.878.. Test Loss: 1.967.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 2.434.. Test Loss: 1.965.. Test Accuracy: 0.495 Epoch: 4/5.. Training Loss: 2.404.. Test Loss: 1.964.. Test Accuracy: 0.496 Epoch: 4/5.. Training Loss: 2.371.. Test Loss: 1.958.. Test Accuracy: 0.501 Epoch: 4/5.. Training Loss: 2.304.. Test Loss: 1.956.. Test Accuracy: 0.504 Epoch: 4/5.. Training Loss: 2.162.. Test Loss: 1.941.. Test Accuracy: 0.517 Epoch: 4/5.. Training Loss: 1.931.. Test Loss: 1.920.. Test Accuracy: 0.537 Epoch: 4/5.. Training Loss: 1.727.. Test Loss: 1.918.. Test Accuracy: 0.542 Epoch: 4/5.. Training Loss: 1.636.. Test Loss: 1.918.. Test Accuracy: 0.541 Epoch: 5/5.. Training Loss: 2.028.. Test Loss: 1.927.. Test Accuracy: 0.532 Epoch: 5/5.. Training Loss: 2.256.. Test Loss: 1.921.. Test Accuracy: 0.537 Epoch: 5/5.. Training Loss: 2.115.. Test Loss: 1.932.. 
Test Accuracy: 0.527 Epoch: 5/5.. Training Loss: 1.942.. Test Loss: 1.931.. Test Accuracy: 0.527 Epoch: 5/5.. Training Loss: 1.762.. Test Loss: 1.945.. Test Accuracy: 0.515 Epoch: 5/5.. Training Loss: 1.612.. Test Loss: 1.945.. Test Accuracy: 0.513 Epoch: 5/5.. Training Loss: 1.549.. Test Loss: 1.958.. Test Accuracy: 0.503 Epoch: 5/5.. Training Loss: 1.535.. Test Loss: 1.964.. Test Accuracy: 0.495 Epoch: 5/5.. Training Loss: 1.501.. Test Loss: 1.966.. Test Accuracy: 0.496 Epoch: 5/5.. Training Loss: 2.239.. Test Loss: 1.964.. Test Accuracy: 0.496 Epoch: 5/5.. Training Loss: 2.416.. Test Loss: 1.969.. Test Accuracy: 0.491 Epoch: 5/5.. Training Loss: 2.370.. Test Loss: 1.956.. Test Accuracy: 0.503 Epoch: 5/5.. Training Loss: 2.308.. Test Loss: 1.932.. Test Accuracy: 0.527 Epoch: 5/5.. Training Loss: 2.188.. Test Loss: 1.932.. Test Accuracy: 0.527 Epoch: 5/5.. Training Loss: 2.029.. Test Loss: 1.916.. Test Accuracy: 0.544 Epoch: 5/5.. Training Loss: 1.826.. Test Loss: 1.921.. Test Accuracy: 0.539 Epoch: 5/5.. Training Loss: 1.705.. Test Loss: 1.929.. Test Accuracy: 0.529 Epoch: 5/5.. Training Loss: 1.596.. Test Loss: 1.935.. Test Accuracy: 0.525
###Markdown
# Loading Image Data

So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.

We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:

We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper
###Output
_____no_output_____
###Markdown
The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:

```python
dataset = datasets.ImageFolder('path/to/data', transform=transforms)
```

where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`.
ImageFolder expects the files and directories to be constructed like so:

```
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png

root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
```

where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set.

## Transforms

When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them all to be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:

```python
transforms = transforms.Compose([transforms.Resize(255),
                                 transforms.CenterCrop(224),
                                 transforms.ToTensor()])
```

There are plenty of transforms available; I'll cover more in a bit, and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html).

## Data Loaders

With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels.
You can set various parameters like the batch size and whether the data is shuffled after each epoch.

```python
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
```

Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.

```python
# Looping through it, get a batch on each loop
for images, labels in dataloader:
    pass

# Get one batch
images, labels = next(iter(dataloader))
```

>**Exercise:** Load images from the `../Cat_Dog_data/train` folder, define a few transforms, then build the dataloader.
###Code
data_dir = 'Cat_Dog_data/train'

# TODO: compose transforms here
data_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])

# TODO: create the ImageFolder
dataset = datasets.ImageFolder(data_dir, transform=data_transforms)

# TODO: use the ImageFolder dataset to create the DataLoader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Run this to test your data loader
images, labels = next(iter(dataloader))
helper.imshow(images[0], normalize=False)
###Output
_____no_output_____
###Markdown
If you loaded the data correctly, you should see something like this (your image will be different):

## Data Augmentation

A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training.
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below.
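Before defining the pipelines, it can help to see the normalization formula in isolation. Here's a minimal plain-Python sketch (a toy stand-in, not PyTorch's implementation) of the per-channel arithmetic `transforms.Normalize` applies:

```python
# Toy sketch of transforms.Normalize per channel:
# out = (in - mean) / std
def normalize_channel(values, mean, std):
    """Apply the per-channel normalization formula to a list of pixel values."""
    return [(v - mean) / std for v in values]

# ToTensor() scales pixels to [0, 1]; with mean=0.5 and std=0.5
# the normalized values land in [-1, 1].
pixels = [0.0, 0.5, 1.0]
print(normalize_channel(pixels, 0.5, 0.5))  # [-1.0, 0.0, 1.0]
```

This is why the means and stds of 0.5 used above squish `ToTensor()` output into the [-1, 1] range.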
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader # data_iter = iter(testloader) data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`.
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # Ztransforms = transforms.Compose([transforms.RandomRotation(30), # transforms.RandomResizedCrop(224), # transforms.RandomHorizontalFlip(), # transforms.ToTensor(),]) # transforms.Normalize([0.5, 0.5, 0.5],[0.5, 0.5, 0.5])]) Ztransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform = Ztransforms) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Test images shouldn't be augmented: just resize, crop, and normalize the same way test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output
_____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms import helper import fc_model ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`.
Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code # Define image folder: inside data_dir, each class should have a subfolder, eg # path/train/dog, path/train/cat... 
data_dir = 'Cat_Dog_data/train' # Compose transforms: Select transformations to apply to dataset in a pipeline # ToTensor: convert into a pytorch tensor transform = transforms.Compose([transforms.Resize(125), transforms.CenterCrop(124), transforms.Grayscale(num_output_channels=3), transforms.ToTensor() ]) # Create the ImageFolder object dataset = datasets.ImageFolder(data_dir, transform=transform) # Create the DataLoader from ImageFolder: DataLoader is an iterator of the dataset dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True) # Obtain batches of images (& labels) from dataloader iterator images, labels = next(iter(dataloader)) # Visualize one image from batch + label/class img_id = 0 class_id = labels[img_id].item() print(dataset.classes[class_id]) helper.imshow(images[img_id], normalize=False) ###Output dog ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # Training data with data augmentation train_transforms = transforms.Compose([#transforms.RandomRotation(30), #transforms.RandomHorizontalFlip(), transforms.Resize(30), transforms.CenterCrop(28), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Test data without augmentation test_transforms = transforms.Compose([ transforms.Resize(30), transforms.CenterCrop(28), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle = True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32, shuffle = True) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax)
###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs input_size = 3*28*28 output_size = 2 hidden_sizes = [512, 256, 128] model = fc_model.Network(input_size, output_size, hidden_sizes) criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) # TRAIN fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=1) ###Output Epoch: 1/1.. Training Loss: 0.706.. Test Loss: 0.683.. Test Accuracy: 0.563 Epoch: 1/1.. Training Loss: 0.711.. Test Loss: 0.682.. Test Accuracy: 0.552 Epoch: 1/1.. Training Loss: 0.691.. Test Loss: 0.681.. Test Accuracy: 0.547 Epoch: 1/1.. Training Loss: 0.699.. Test Loss: 0.686.. Test Accuracy: 0.549 Epoch: 1/1.. Training Loss: 0.684.. Test Loss: 0.680.. Test Accuracy: 0.563 Epoch: 1/1.. Training Loss: 0.697.. Test Loss: 0.676.. Test Accuracy: 0.574 Epoch: 1/1.. Training Loss: 0.682.. Test Loss: 0.677.. Test Accuracy: 0.570 Epoch: 1/1.. Training Loss: 0.683.. Test Loss: 0.672.. Test Accuracy: 0.592 Epoch: 1/1.. Training Loss: 0.676.. Test Loss: 0.667.. Test Accuracy: 0.600 Epoch: 1/1.. Training Loss: 0.684.. Test Loss: 0.670.. Test Accuracy: 0.612 Epoch: 1/1.. Training Loss: 0.686.. Test Loss: 0.675.. Test Accuracy: 0.600 Epoch: 1/1.. Training Loss: 0.679.. Test Loss: 0.664..
Test Accuracy: 0.611 Epoch: 1/1.. Training Loss: 0.688.. Test Loss: 0.676.. Test Accuracy: 0.582 Epoch: 1/1.. Training Loss: 0.674.. Test Loss: 0.664.. Test Accuracy: 0.613 Epoch: 1/1.. Training Loss: 0.671.. Test Loss: 0.660.. Test Accuracy: 0.605 Epoch: 1/1.. Training Loss: 0.676.. Test Loss: 0.660.. Test Accuracy: 0.591 Epoch: 1/1.. Training Loss: 0.673.. Test Loss: 0.674.. Test Accuracy: 0.570 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels.
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `../Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '../Cat_Dog_data/train' # TODO: compose transforms here full_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=full_transforms) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below.
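As a side note, the chaining that `transforms.Compose` performs is simple to sketch in plain Python. This is a toy stand-in (not the torchvision implementation) just to show that each transform's output is fed into the next one, so order matters:

```python
# Toy sketch of the behavior of transforms.Compose:
# apply each transform in order, piping each output into the next.
class ToyCompose:
    def __init__(self, fns):
        self.fns = fns

    def __call__(self, x):
        for fn in self.fns:
            x = fn(x)
        return x

# Order matters: (3 + 1) * 2 = 8, not 3 * 2 + 1 = 7.
pipeline = ToyCompose([lambda x: x + 1, lambda x: x * 2])
print(pipeline(3))  # 8
```

This is why `ToTensor()` goes before `Normalize()` in the pipelines above: `Normalize` expects a tensor, not a PIL image.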
###Code data_dir = '../Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose( [ transforms.Resize(255), transforms.CenterCrop(224), transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ] ) test_transforms = transforms.Compose( [ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ] ) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms from torch import nn from torch import optim import torch.nn.functional as F import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`.
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels.
You can set various parameters like the batch size and whether the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' data_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(220), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, data_transforms) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], title='Dog' if labels[0] == 1 else 'Cat', normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data Augmentation A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn. You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop. >**Exercise:** Define transforms for training data and testing data below. 
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([ transforms.Resize(200), transforms.CenterCrop(100), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32, shuffle=True) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this. Training examples: Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny). In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
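As a quick sanity check of the `Normalize` arithmetic described earlier, here is a plain-Python sketch (the pixel values are hypothetical) using mean = std = 0.5 for a single channel:

```python
mean, std = 0.5, 0.5
pixels = [0.0, 0.25, 0.5, 1.0]  # hypothetical channel values in [0, 1]
# normalize: (x - mean) / std maps [0, 1] onto [-1, 1]
normalized = [(p - mean) / std for p in pixels]
print(normalized)  # -> [-1.0, -0.5, 0.0, 1.0]
# inverting the transform (x * std + mean) recovers the original values,
# which is what you would do before displaying a normalized image
restored = [n * std + mean for n in normalized]
print(restored)  # -> [0.0, 0.25, 0.5, 1.0]
```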
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.cn1 = nn.Conv2d(3, 10, kernel_size=5) self.lc1 = nn.Linear(92160, 256) self.lc2 = nn.Linear(256, 2) def forward(self, x): x = self.cn1(x) x = F.relu(x) x = x.view(x.size(0), -1) x = self.lc1(x) x = F.relu(x) x = self.lc2(x) return x model = Model() model criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) data_iter = iter(trainloader) ###Output _____no_output_____ ###Markdown Try a single forward and backward pass. ###Code images, labels = next(data_iter) helper.imshow(images[0], title='Dog' if labels[0] == 1 else 'Cat', normalize=True) optimizer.zero_grad() output = model(images) loss = criterion(output, labels) loss.backward() optimizer.step() print('Loss: {}'.format(loss)) data_iter = iter(trainloader) images, labels = next(data_iter) optimizer.zero_grad() logits = model(images) prob = F.softmax(logits, dim=1) print('Probabilities: {}'.format(prob)) print('Labels: {}'.format(labels)) criterion(output, labels) F.one_hot(labels, 2) ###Output _____no_output_____ ###Markdown Loading Image Data So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks. We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images: We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. Transforms When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data Loaders With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and whether the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms import helper data_dir = '/home/ec2-user/SageMaker/DL_PyTorch/Cat_Dog_data' data_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=data_transforms) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data Augmentation A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. 
Without normalization, networks will tend to fail to learn. You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop. >**Exercise:** Define transforms for training data and testing data below. ###Code from torchvision import datasets, transforms data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Loading Image Data So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks. We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images: We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. Transforms When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. 
We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data Loaders With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and whether the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transforms = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data Augmentation A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(100), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn. You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way). 
So, for validation/test images, you'll typically just resize and crop. >**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this. Training examples: Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny). In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____
CoursMagistral_1.ipynb
###Markdown Table of Contents 1&nbsp;&nbsp;ALGO1: Introduction to Algorithmics 2&nbsp;&nbsp;Lecture 1 2.1&nbsp;&nbsp;Singly linked lists 2.1.1&nbsp;&nbsp;pop/push for a stack (FILO) structure 2.1.2&nbsp;&nbsp;add/remove for a queue (FIFO) structure 2.2&nbsp;&nbsp;Doubly linked lists 2.2.1&nbsp;&nbsp;Example 2.3&nbsp;&nbsp;Implementing a queue with two stacks 2.4&nbsp;&nbsp;Priority queues 2.5&nbsp;&nbsp;Implementation of a binary heap 2.5.1&nbsp;&nbsp;Naive: a sorted array! 2.5.2&nbsp;&nbsp;Smart: a balanced binary min-heap 2.6&nbsp;&nbsp;Heapsort 2.6.1&nbsp;&nbsp;Remarks 2.6.2&nbsp;&nbsp;Heapsort 2.7&nbsp;&nbsp;Numerical test of the efficiency of heapsort 2.8&nbsp;&nbsp;Numerical evaluation of the complexity of the binary heap operations 2.9&nbsp;&nbsp;Conclusion [ALGO1: Introduction to Algorithmics](https://perso.crans.org/besson/teach/info1_algo1_2019/) - [Course page](https://perso.crans.org/besson/teach/info1_algo1_2019/): https://perso.crans.org/besson/teach/info1_algo1_2019/ - Magistère d'Informatique de Rennes - ENS Rennes - Year 2019/2020 - Instructors: + Lectures: [Lilian Besson](https://perso.crans.org/besson/) + Tutorials: [Raphaël Truffet](http://perso.eleves.ens-rennes.fr/people/Raphael.Truffet/) - References: + [Open Data Structures](http://opendatastructures.org/ods-python.pdf) Lecture 1 Singly linked lists ![figures/CM1_SimplyLinkedList.png](figures/CM1_SimplyLinkedList.png) We use a small class that wraps the current data item and the pointer to the rest of the list. 
###Code class ListNode: def __init__(self, data, link=None): self.data = data self.link = link def __str__(self): return "[{}|.-]->{}".format(str(self.data), "" if self.link is None else str(self.link)) example_node = ListNode(0) print(example_node) example_node2 = ListNode(1, link=example_node) print(example_node2) ###Output [0|.-]-> [1|.-]->[0|.-]-> ###Markdown We can walk $i$ steps along this linear structure: ###Code def traverse(one_node, i): assert i >= 0 if i == 0: return one_node.data else: return traverse(one_node.link, i-1) [ traverse(example_node, 0) ] # traverse(example_node, 1) [ traverse(example_node2, 1), traverse(example_node2, 0) ] ###Output _____no_output_____ ###Markdown We now implement the push/pop and add/remove operations: ###Code class LinkedList: def __init__(self): self._head = None self._tail = None self._length = 0 def __len__(self): return self._length def isempty(self): return len(self) == 0 # Methods push/pop for Stack (FILO) data structure def _addfirst(self, item): self._head = ListNode(item, self._head) # if it has only one element, we make it loop if self._tail is None: self._tail = self._head # but the structure knows it has only one element: length = 1 self._length += 1 def push(self, item): """ Insert a new element as the new head, in O(1) time.""" self._addfirst(item) def _removefirst(self): item = self._head.data # get the current head data self._head = self._head.link # compress the head if self._head is None: # if link was None, then list is now empty self._tail = None self._length -= 1 # remove one element return item def pop(self): """ Get and remove the head, in O(1) time.""" return self._removefirst() # Methods add/remove for Queue (FIFO) data structure def _addlast(self, item): if self._head is None: # if list is empty, just add at the beginning self._addfirst(item) else: # or create new element, and change tail self._tail.link = ListNode(item) self._tail = self._tail.link self._length += 1 def add(self, item): """ Insert
a new element at the end of the list, in O(1) time.""" self._addlast(item) remove = pop def removelast(self): if self._head is self._tail: return self._removefirst() else: currentnode = self._head while currentnode.link is not self._tail: currentnode = currentnode.link item = self._tail.data self._tail = currentnode self._tail.link = None self._length -= 1 return item # Access to i-th element, in O(i) def __getitem__(self, index): if not (0 <= index < len(self)): raise IndexError return traverse(self._head, index) def items(self): n = len(self) return [ self[i] for i in range(len(self)) ] # Method to print the list def __str__(self) -> str: if self.isempty(): return "[]" return str(self._head) ###Output _____no_output_____ ###Markdown Two examples, which can be visualized even better on [PythonTutor.com](http://pythontutor.com/live.html#mode=edit). `pop`/`push` for a stack (FILO) structure ###Code example_list = LinkedList() print(example_list) example_list.push(0) print(example_list) example_list.push(1) print(example_list) example_list.push(2) print(example_list) example_list.push(3) print(example_list) print(example_list.items()) for i in range(len(example_list)): print("{}th value is = {}".format(i, example_list[i])) example_list.pop() print(example_list) example_list.pop() print(example_list) example_list.pop() print(example_list) example_list.pop() print(example_list) ###Output [] [0|.-]-> [1|.-]->[0|.-]-> [2|.-]->[1|.-]->[0|.-]-> [3|.-]->[2|.-]->[1|.-]->[0|.-]-> [3, 2, 1, 0] 0th value is = 3 1th value is = 2 2th value is = 1 3th value is = 0 ###Markdown `add`/`remove` for a queue (FIFO) structure ###Code example_list = LinkedList() print(example_list) example_list.add(0) print(example_list) example_list.add(1) print(example_list) example_list.add(2) print(example_list) example_list.add(3) print(example_list) print(example_list.items()) for i in range(len(example_list)): print("{}th value is = {}".format(i, example_list[i])) example_list.remove() 
print(example_list) example_list.remove() print(example_list) example_list.remove() print(example_list) example_list.remove() print(example_list) ###Output [] [0|.-]-> [0|.-]->[1|.-]-> [0|.-]->[1|.-]->[2|.-]-> [0|.-]->[1|.-]->[2|.-]->[3|.-]-> [0, 1, 2, 3] 0th value is = 0 1th value is = 1 2th value is = 2 3th value is = 3 ###Markdown Doubly linked lists ![figures/CM1_DoublyLinkedList.png](figures/CM1_DoublyLinkedList.png) We use a small class that wraps the current data item and the two pointers to the next and previous nodes. ###Code class ListNodeDoublyLinked: def __init__(self, data, prev = None, link = None): self.data = data self.prev = prev self.link = link if prev is not None: self.prev.link = self if link is not None: self.link.prev = self def __str__(self): return "[{}]{}".format(str(self.data), "" if self.link is None else "<->{}".format(str(self.link))) class DoublyLinkedList: def __init__(self): self._head = None self._tail = None self._length = 0 def isempty(self): return self._length == 0 def __len__(self): return self._length # Add an element, in O(1) def _addbetween(self, item, before, after): node = ListNodeDoublyLinked(item, before, after) if after is self._head: self._head = node if before is self._tail: self._tail = node self._length += 1 def addfirst(self, item): """ Insert a new element as the beginning of the list, in O(1) time.""" self._addbetween(item, None, self._head) def addlast(self, item): """ Insert a new element as the end of the list, in O(1) time.""" self._addbetween(item, self._tail, None) # Remove an element, in O(1) def _remove(self, node): before, after = node.prev, node.link if node is self._head: self._head = after else: before.link = after if node is self._tail: self._tail = before else: after.prev = before self._length -= 1 return node.data def removefirst(self): """ Remove and return the beginning of the list, in O(1) time.""" return self._remove(self._head) def removelast(self): """ Remove and 
return the end of the list, in O(1) time.""" return self._remove(self._tail) # Concatenate another list into this one, in O(1) def __iadd__(self, other): if other._head is None: return if self._head is None: self._head = other._head else: self._tail.link = other._head other._head.prev = self._tail self._tail = other._tail self._length = self._length + other._length # Clean up the other list. other.__init__() return self # Access to i-th element, in O(i) def __getitem__(self, index): if not (0 <= index < len(self)): raise IndexError return traverse(self._head, index) def items(self): n = len(self) return [ self[i] for i in range(len(self)) ] # Method to print the list def __str__(self) -> str: if self.isempty(): return "[]" return str(self._head) ###Output _____no_output_____ ###Markdown An example, again best visualized on [PythonTutor.com](http://pythontutor.com/live.html#mode=edit). Example ###Code example_list = DoublyLinkedList() print(example_list) example_list.addfirst(0) print(example_list) example_list.addfirst(1) print(example_list) example_list.addfirst(2) print(example_list) example_list.addlast(100) print(example_list) example_list.addlast(101) print(example_list) example_list.addlast(102) print(example_list) print(list(example_list)) example_list.removefirst() print(example_list) example_list.removelast() print(example_list) example_list.removefirst() print(example_list) example_list.removelast() print(example_list) example_list.removefirst() print(example_list) example_list.removelast() print(example_list) ###Output [] [0] [1]<->[0] [2]<->[1]<->[0] [2]<->[1]<->[0]<->[100] [2]<->[1]<->[0]<->[100]<->[101] [2]<->[1]<->[0]<->[100]<->[101]<->[102] [2, 1, 0, 100, 101, 102] ###Markdown Implementing a queue with two stacks - We will use two stacks (Python `list`s) ###Code # https://github.com/jilljenn/tryalgo/blob/master/tryalgo/our_queue.py class Queue: """A FIFO queue - Complexity: + all operators in amortized constant time, + except __str__ 
which is linear """ def __init__(self): self.in_stack = [ ] # tail self.out_stack = [ ] # head def __len__(self): return len(self.in_stack) + len(self.out_stack) def push(self, obj): self.in_stack.append(obj) def pop(self): if not self.out_stack: # head is empty self.out_stack = self.in_stack[::-1] self.in_stack = [] return self.out_stack.pop() def __str__(self): return str(self.out_stack[::-1] + self.in_stack) queue = Queue() queue.push(0) print(queue) queue.push(1) print(queue) queue.push(2) print(queue) queue.push(3) print(queue) queue.pop() print(queue) queue.pop() print(queue) queue.pop() print(queue) ###Output [0] [0, 1] [0, 1, 2] [0, 1, 2, 3] ###Markdown Priority queues About… Implementation of a binary heap Naive: a sorted array! We keep the array sorted, inserting each new element at its correct position (with local swaps), as in insertion sort. ###Code def swap(array, i, j): array[i], array[j] = array[j], array[i] class OurNaiveHeap: """ min naive heap * heap: is the actual heap, containing the sorted values Complexity: init O(n^2), len O(1), other operations O(n) in all cases """ def __init__(self, items=None): self.heap = [] # the heap is kept sorted in increasing order if items is not None: for x in items: self.push(x) def __len__(self): return len(self.heap) def push(self, x): """Insert new element x in the heap.""" # add a new element self.heap.append(x) # then insert it, from the end, to its correct location position = len(self) - 1 while position > 0 and self.heap[position - 1] > self.heap[position]: swap(self.heap, position - 1, position) position -= 1 def pop(self): """Remove and return smallest element""" # move heap[0] to heap[n] and copy heap[1:n] to heap[0:n-1] for position in range(len(self) - 1): swap(self.heap, position, position + 1) smallest_element = self.heap.pop() # remove last element return smallest_element ###Output _____no_output_____ ###Markdown Smart: a balanced binary min-heap 
###Code class OurHeap: """ min heap * heap: is the actual heap, heap[1] is the smallest element :complexity: init O(n log n), len O(1), other operations O(log n) in the worst case """ def __init__(self, items=None): self.heap = [None] # index 0 will be ignored if items is not None: for x in items: self.push(x) def __len__(self): return len(self.heap) - 1 def push(self, x): """Insert new element x in the heap.""" i = len(self.heap) self.heap.append(x) # add a new leaf self.up(i) # maintain heap order def pop(self): """Remove and return smallest element""" root = self.heap[1] x = self.heap.pop() # remove last leaf if self: # if heap is not empty self.heap[1] = x # put last leaf to root self.down(1) # maintain heap order return root def up(self, i): """The value of heap[i] has decreased. Maintain heap invariant.""" x = self.heap[i] while i > 1 and x < self.heap[i // 2]: self.heap[i] = self.heap[i // 2] i //= 2 self.heap[i] = x # insertion index found def down(self, i): """The value of heap[i] has increased.
Maintain heap invariant.""" x = self.heap[i] n = len(self.heap) while True: left = 2 * i # climb down the tree right = left + 1 if (right < n and self.heap[right] < x and self.heap[right] < self.heap[left]): self.heap[i] = self.heap[right] i = right elif left < n and self.heap[left] < x: self.heap[i] = self.heap[left] i = left else: self.heap[i] = x # insertion index found return ###Output _____no_output_____ ###Markdown Heap sort Once we have an implementation of a (min-)heap, we can easily sort an array `T` as follows:- Input: an array `T` of size `n`- Create a heap `mon_tas`- For each value `T[i]` in the array `T`: + push `T[i]` onto `mon_tas`- Create an array `T_trie` of the same size as `T` (`n`)- Initialize `i = 0`- While `mon_tas` is not empty: + extract the minimum of the heap: `nouveau_min_du_tas <- extraireMin(mon_tas)` + place this minimum at the `i`-th position of the new array: `T_trie[i] = nouveau_min_du_tas` + `i += 1`- Output: the array `T_trie` is the array `T` sorted in ascending order. Remarks- An advantage of heap sort is that all these operations can also be performed *in place* (i.e., using the array `T` itself and no extra memory).- Use a max-heap to sort in descending order, or simply reverse the array `T_trie` at the end. Heap sort The algorithm is independent of the heap structure being used!
###Code def heapSort(array, heapStructure=OurHeap): n = len(array) heap = heapStructure() for i in range(n): heap.push(array[i]) sorted_array = [ None ] * n # of size n i = 0 while heap: # while not empty sorted_array[i] = heap.pop() i += 1 return sorted_array def insertionSort(array): return heapSort(array, heapStructure=OurNaiveHeap) example_array = [10, 9, 19] sorted(example_array) heapSort(example_array) insertionSort(example_array) example_array = list(range(2019)) + list(range(2019)) # twice the numbers from 0 to 2018 import random random.shuffle(example_array) %timeit sorted(example_array) %timeit heapSort(example_array) %timeit insertionSort(example_array) ###Output 989 µs ± 72.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 23.8 ms ± 1.33 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 3.85 s ± 577 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ###Markdown Numerical test of the efficiency of heap sort ###Code import matplotlib as mpl mpl.rcParams['figure.figsize'] = (10, 7) mpl.rcParams['figure.dpi'] = 120 import seaborn as sns sns.set(context="notebook", style="whitegrid", palette="hls", font="sans-serif", font_scale=1.1) import matplotlib.pyplot as plt import random random.seed(1234) ###Output _____no_output_____ ###Markdown We will generate random arrays: ###Code def random_array_of_int(max_int=10000, length=1000): return [ random.randint(0, max_int) for _ in range(length) ] random_array_of_int(max_int=20, length=10) ###Output _____no_output_____ ###Markdown We can easily measure the running time of a sort function on random arrays ###Code import timeit try: from tqdm import tqdm_notebook except ImportError: def tqdm_notebook(iterator, *args, **kwargs): return iterator def time_a_sort_function(sort_function, sort_function_name, values_n, number=1000, max_int=1000000): return [ timeit.timeit("{}(random_array_of_int(max_int={}, length={}))".format(sort_function_name, max_int, n), globals={
'random_array_of_int': random_array_of_int, sort_function_name: sort_function, }, number=number, ) for n in tqdm_notebook(values_n) ] ###Output _____no_output_____ ###Markdown Let's compare our heap sort with Python's builtin `sorted()` function: ###Code small_values_n = [10, 100, 500] + list(range(1000, 5000, 1000)) big_values_n = list(range(6000, 100000, 4000)) # very_big_values_n = list(range(100000, 5000000, 100000)) values_n = small_values_n + big_values_n #+ very_big_values_n times_sorted = time_a_sort_function(sorted, "sorted", values_n, number=100) times_heapSort = time_a_sort_function(heapSort, "heapSort", values_n, number=100) times_insertionSort = time_a_sort_function(insertionSort, "insertionSort", small_values_n, number=20) plt.figure() plt.xlabel("Input array size $n$") plt.ylabel("Time in seconds") plt.title("Comparison of the builtin, heap and insertion sorts") plt.plot(values_n, times_sorted, "d-", label="Builtin", lw=5, ms=12) plt.plot(values_n, times_heapSort, "o-", label="Heap sort", lw=5, ms=12) plt.plot(small_values_n, times_insertionSort, ">-", label="Insertion sort", lw=5, ms=12) plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Note: sorting very small integers can be done in linear time (*bin sort*): ###Code %timeit sorted(random_array_of_int(max_int=10, length=100)) %timeit sorted(random_array_of_int(max_int=10, length=1000)) %timeit sorted(random_array_of_int(max_int=10, length=10000)) %timeit sorted(random_array_of_int(max_int=1000, length=100)) %timeit sorted(random_array_of_int(max_int=1000, length=1000)) %timeit sorted(random_array_of_int(max_int=1000, length=10000)) ###Output 142 µs ± 16.7 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) 1.32 ms ± 74 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 12.8 ms ± 307 µs per loop (mean ± std. dev.
of 7 runs, 100 loops each) ###Markdown Numerical evaluation of the complexity of the binary heap operations We can evaluate, on examples, the complexity of the push and extract-min operations implemented in our binary heap structure. We showed that both run in $\mathcal{O}(\log(n))$, but can we verify this empirically? ###Code import numpy as np def times_push_and_pop(values_n, number1=100, number2=100, max_int=1000_000): # create a random array for each value of n times_push = np.array([ np.mean([ timeit.timeit( "heap.push(random.randint(-{}, {}))".format(max_int, max_int), globals={ 'random_array_of_int': random_array_of_int, 'heap': heap, 'random': random, }, number=number1, ) / number1 for _ in range(number2) for heap in [ OurHeap(random_array_of_int(max_int=max_int, length=n)) ] ]) for n in tqdm_notebook(values_n, desc="push") ]) times_both = np.array([ np.mean([ timeit.timeit( "heap.push(random.randint(-{}, {})); heap.pop()".format(max_int, max_int), globals={ 'random_array_of_int': random_array_of_int, 'heap': heap, 'random': random, }, number=number1, ) / number1 for _ in range(number2) for heap in [ OurHeap(random_array_of_int(max_int=max_int, length=n)) ] ]) for n in tqdm_notebook(values_n, desc="push & pop") ]) times_pop = times_both - times_push return times_push, times_pop times_push_and_pop([10, 100, 1000], number1=100, number2=1000) def visualisation_temps_push_and_pop(values_n, **kwargs): times_push, times_pop = times_push_and_pop(values_n, **kwargs) plt.figure() plt.xlabel("Input array size $n$") plt.ylabel("Time in microseconds") plt.title("Time of the push and pop operations") plt.plot(values_n, 1e6 * times_push, "d-", label="push", lw=5, ms=12) plt.plot(values_n, 1e6 * times_pop, "o-", label="pop", lw=5, ms=12) plt.legend() plt.show() visualisation_temps_push_and_pop( [ 100, 500, 1000, 2000, 3000, 4000, 5000, 10000, 20000, 30000, 40000, 50000,
60000, 70000, 80000, 90000, 100000, #110000, 120000, 130000, 140000, 150000, 160000, 170000, 180000, 190000, 200000, #300000, 400000, 500000, 600000, 700000, 800000, 900000, #1000000, 2000000, 3000000, 4000000, 5000000, 6000000, 7000000, 8000000, 9000000, ], number1=100, number2=1000, ) visualisation_temps_push_and_pop( [ 100, 500, 1000, 2000, 3000, 4000, 5000, 10000, 20000, 30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000, 110000, 120000, 130000, 140000, 150000, 160000, 170000, 180000, 190000, 200000, 300000, 400000, 500000, 600000, 700000, 800000, 900000, #1000000, 2000000, 3000000, 4000000, 5000000, 6000000, 7000000, 8000000, 9000000, ], number1=20, number2=100, ) ###Output _____no_output_____
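The *bin sort* mentioned above can be sketched in a few lines (an illustration added here, not code from the original notebook): with one counter per possible value, `n` integers in the range `[0, max_int]` are sorted in O(n + max_int) time, which is linear when `max_int` is small.

```python
def bin_sort(array, max_int):
    """Sort integers in [0, max_int] in O(n + max_int) time ("bin"/counting sort)."""
    counts = [0] * (max_int + 1)   # one bin per possible value
    for x in array:
        counts[x] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)  # emit each value as many times as it was seen
    return result

bin_sort([3, 1, 2, 1, 0], max_int=3)  # → [0, 1, 1, 2, 3]
```

Note that the builtin `sorted()` does not literally use bin sort; the timings above only illustrate that a tiny value range makes sorting cheap.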
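Since `heapSort` only relies on the `push`/`pop`/`__len__` interface, any heap implementation with that interface works — including the standard library's `heapq` module, via a thin adapter. The class `HeapqHeap` below is a sketch added for illustration; it is not defined in the original notebook.

```python
import heapq

class HeapqHeap:
    """Min-heap with the push/pop/len interface expected by heapSort, backed by heapq."""

    def __init__(self, items=None):
        self.heap = list(items) if items is not None else []
        heapq.heapify(self.heap)  # O(n) bottom-up heapify

    def __len__(self):
        return len(self.heap)

    def push(self, x):
        heapq.heappush(self.heap, x)    # O(log n)

    def pop(self):
        return heapq.heappop(self.heap)  # O(log n)
```

Calling `heapSort(example_array, heapStructure=HeapqHeap)` would then sort with the stdlib heap, confirming that the algorithm is independent of the heap structure.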
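The remark earlier notes that heap sort can be done *in place*, using the array itself as the heap. A classical sketch (added here for illustration; this max-heap variant is not part of the original notebook) first heapifies the array bottom-up, then repeatedly swaps the maximum to the end and shrinks the heap:

```python
def heapsort_in_place(a):
    """Sort list a in ascending order, in place: O(n log n) time, O(1) extra space."""
    n = len(a)

    def sift_down(i, size):
        # push a[i] down until the max-heap property holds in a[:size]
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < size and a[left] > a[largest]:
                largest = left
            if right < size and a[right] > a[largest]:
                largest = right
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    # heapify bottom-up, in O(n)
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # repeatedly move the current maximum to the end and shrink the heap
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

heapsort_in_place([10, 9, 19])  # → [9, 10, 19]
```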
notebooks/Splitter with Interactive Transcription 002.ipynb
###Markdown Load language model ###Code C = Cfg('NIST', 16000, 'pashto', 'build') model = load_pretrained_model(C, 1) ###Output searching save/nemo_pashto/*.ckpt ###Markdown Start with BUILD to visualize and test transcriptions ###Code if __name__ == '__main__': with Pool(16) as pool: recordings = RecordingCorpus(C, pool) splits=SplitCorpus.transcript_split(C, recordings) artifact=splits.artifacts[10] artifact.display() pred=transcribe(C, model, artifact.source.value) pred pred==artifact.target.value ###Output _____no_output_____ ###Markdown Move onto DEV to visualize, test and refine splitter ###Code C = Cfg('NIST', 16000, 'pashto', 'dev') if __name__ == '__main__': with Pool(16) as pool: recordings = RecordingCorpus(C, pool) artifact=recordings.artifacts[0] n_samples=artifact.source.n_samples audio=artifact.source.value # use a distinct name so the model-level transcribe() called on each clip is not shadowed def split_and_transcribe(C, model, audio): cuts=[0] preds=[] while True: size=audio.shape[0] print("size", size) if size == 0: break max_duration=6 max_samples=int(max_duration*C.sample_rate) min_samples=int(0.2*C.sample_rate) if size > max_samples: # raise the silence threshold until a usable silent stretch appears for cutoff in np.linspace(-80,-10,140): T=audio.shape[0]/C.sample_rate S = librosa.feature.melspectrogram(y=audio, sr=C.sample_rate, n_mels=64, fmax=8000) S_dB = librosa.power_to_db(S, ref=np.max) s_dB_mean=np.mean(S_dB,axis=0) speech_q=(s_dB_mean>cutoff) silences=T*collect_false(speech_q)/len(speech_q) # print(f'cutoff {cutoff} #silences {len(silences)}') S2=[(x,y) for x,y in silences if max_duration >= y > 0.001] if len(S2): break if cutoff > -18: plt.figure(figsize=(60,4)) plt.plot(s_dB_mean) plt.show() plt.figure(figsize=(60,4)) plt.plot(audio); raise ValueError("couldn't split clip") S3=int(S2[-1][-1]*C.sample_rate) else: S3=size clip=audio[0:S3] pred=transcribe(C, model, clip) cuts.append(S3) preds.append(pred) if pred != '': print(f"sample size in seconds {S3/C.sample_rate} pred {pred} :: {unidecode(pred)}") # play(clip) audio=audio[S3:] if audio.shape[0] < min_samples: break
times=np.cumsum(cuts)/C.sample_rate transcript=[(times[i], times[i+1], preds[i]) for i in range(len(preds)) if preds[i]] return transcript import pandas as pd pd.DataFrame(transcript, columns=['start', 'end', 'pred']) ###Output size 9600960 [NeMo I 2020-10-21 20:51:18 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:18 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.5359375 pred وععلیک السلام :: w``lykh lslm size 9512385 [NeMo I 2020-10-21 20:51:19 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:19 collections:174] 0 files were filtered totalling 0.00 hours size 9511874 [NeMo I 2020-10-21 20:51:20 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:20 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 2.7839375 pred ښه هغکه دار يکټم پ وکرام :: Sh hGkhh dr ykhtm p wkhrm size 9467331 [NeMo I 2020-10-21 20:51:21 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:21 collections:174] 0 files were filtered totalling 0.00 hours size 9380292 [NeMo I 2020-10-21 20:51:21 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:21 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 4.4159375 pred ښښه ښه هغه خوصصحد ه تنګه :: SSh Sh hGh khwSSHd h tnKh size 9309637 [NeMo I 2020-10-21 20:51:22 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:22 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 1.9519375 pred ه :: h size 9278406 [NeMo I 2020-10-21 20:51:23 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:23 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.9519375 pred زه هاو کور کښې یم :: zh hw khwr khS ym size 9183175 [NeMo I 
2020-10-21 20:51:23 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:23 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 1.6959375 pred هو :: hw size 9156040 [NeMo I 2020-10-21 20:51:24 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:24 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 4.5439375 pred هو هغه خو ښ تم شې وو پړا ځېپه خته ه ش بس بیا افغغین که څوس ګاروم :: hw hGh khw S tm sh ww pR 'hph khth h sh bs by fGGyn khh Hws Krwm size 9083337 [NeMo I 2020-10-21 20:51:24 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:24 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.0879375 pred وس مینهکه :: ws mynhkhh size 9001930 [NeMo I 2020-10-21 20:51:25 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:25 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 4.0639375 pred اوس کویې یم وسس ګار مسېنال :: ws khwy ym wss Kr msnl size 8936907 [NeMo I 2020-10-21 20:51:25 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:25 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.1519375 pred و :: w size 8854476 [NeMo I 2020-10-21 20:51:26 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:26 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.5999375 pred او هو :: w hw size 8764877 [NeMo I 2020-10-21 20:51:26 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:26 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 1.6959375 pred ه :: h size 8737742 [NeMo I 2020-10-21 20:51:27 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:27 
collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 4.6079375 pred ښه :: Sh size 8664015 [NeMo I 2020-10-21 20:51:27 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:27 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 4.0959375 pred ښه دي کر ه ې راک ه :: Sh dy khr h rkh h size 8598480 [NeMo I 2020-10-21 20:51:28 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:28 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.3439375 pred ولي ن شته غوسېهل :: wly n shth Gwshl size 8512977 [NeMo I 2020-10-21 20:51:28 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:28 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.1199375 pred هو اوس ې په غوېېږ راوړو :: hw ws ph GwR rwRw size 8431058 [NeMo I 2020-10-21 20:51:29 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:29 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 4.8639375 pred پوس وې پهغېی کې را وړې دل ته مړه :: pws w phGy kh r wR dl th mRh size 8353235 [NeMo I 2020-10-21 20:51:29 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:29 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.7919375 pred نه ځیمې ده داولله نم وی :: nh 'hym dh dwllh nm wy size 8260564 [NeMo I 2020-10-21 20:51:29 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:29 collections:174] 0 files were filtered totalling 0.00 hours size 8179157 [NeMo I 2020-10-21 20:51:30 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:30 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.6959375 pred نو بس کال ته خابه کېږي :: nw bs khl th khbh khRy size 
8088022 [NeMo I 2020-10-21 20:51:30 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:30 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 4.7999375 pred هو نو بس ښه دي پاکړه شوی ی :: hw nw bs Sh dy pkhRh shwy y size 8011223 [NeMo I 2020-10-21 20:51:31 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:31 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 4.3839375 pred ښه ته څنهګه شتاتا پو ووروګ رام نږره ورم نه څنګه دلي :: Sh th HnhKh shtt pw wwrwK rm nRrh wrm nh HnKh dly size 7941080 [NeMo I 2020-10-21 20:51:31 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:31 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 4.8639375 pred ه :: h size 7863257 [NeMo I 2020-10-21 20:51:31 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:31 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.8239375 pred ښه ښه ښځه د م بارک کشه :: Sh Sh S'hh d m brkh khshh size 7770074 [NeMo I 2020-10-21 20:51:32 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:32 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.6639375 pred د دی دسکاله مړه ده ښه :: d dy dskhlh mRh dh Sh size 7679451 [NeMo I 2020-10-21 20:51:32 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:32 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 1.9519375 pred وګ :: wK size 7648220 [NeMo I 2020-10-21 20:51:33 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:33 collections:174] 0 files were filtered totalling 0.00 hours size 7598557 [NeMo I 2020-10-21 20:51:33 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 
20:51:33 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 3.6159375 pred هو زه و دغلې م دغه نه :: hw zh w dGl m dGh nh size 7540702 [NeMo I 2020-10-21 20:51:34 collections:173] Dataset loaded with 1 files totalling 27.78 hours [NeMo I 2020-10-21 20:51:34 collections:174] 0 files were filtered totalling 0.00 hours sample size in seconds 5.4399375 pred دغه نمې څکاوزونو ره ول وو وکار نه :: dGh nm Hkhwzwnw rh wl ww wkhr nh size 7453663
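The splitter above calls a helper `collect_false` that is defined elsewhere in this project. A plausible NumPy implementation — an assumption, written so that `T * collect_false(speech_q) / len(speech_q)` converts each pair of frame indices into (start, end) times in seconds — collects one `(start, end)` index pair per run of `False` values (i.e. per silent stretch):

```python
import numpy as np

def collect_false(mask):
    """Return an array of (start, end) index pairs, one per run of False in mask.

    Ends are inclusive. Assumed behaviour of the project helper, for illustration.
    """
    mask = np.asarray(mask, dtype=bool)
    # pad with True on both sides so runs touching the edges are detected too
    padded = np.concatenate(([True], mask, [True]))
    starts = np.flatnonzero(padded[:-1] & ~padded[1:])       # True -> False transitions
    ends = np.flatnonzero(~padded[:-1] & padded[1:]) - 1     # False -> True transitions
    return np.column_stack((starts, ends))

collect_false([True, False, False, True, False])  # → [[1, 2], [4, 4]]
```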
examples/tutorials/06_Introduction_to_Graph_Convolutions.ipynb
###Markdown Tutorial Part 6: Introduction to Graph ConvolutionsIn this tutorial we will learn more about "graph convolutions." These are one of the most powerful deep learning tools for working with molecular data. The reason for this is that molecules can be naturally viewed as graphs.![Molecular Graph](https://github.com/deepchem/deepchem/blob/master/examples/tutorials/basic_graphs.gif?raw=1)Note how standard chemical diagrams of the sort we're used to from high school lend themselves naturally to visualizing molecules as graphs. In the remainder of this tutorial, we'll dig into this relationship in significantly more detail. This will let us get a deeper understanding of how these systems work. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/06_Introduction_to_Graph_Convolutions.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Anaconda on your local machine. ###Code !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem ###Output _____no_output_____ ###Markdown What are Graph Convolutions?Consider a standard convolutional neural network (CNN) of the sort commonly used to process images. The input is a grid of pixels. There is a vector of data values for each pixel, for example the red, green, and blue color channels.
The data passes through a series of convolutional layers. Each layer combines the data from a pixel and its neighbors to produce a new data vector for the pixel. Early layers detect small scale local patterns, while later layers detect larger, more abstract patterns. Often the convolutional layers alternate with pooling layers that perform some operation such as max or min over local regions.Graph convolutions are similar, but they operate on a graph. They begin with a data vector for each node of the graph (for example, the chemical properties of the atom that node represents). Convolutional and pooling layers combine information from connected nodes (for example, atoms that are bonded to each other) to produce a new data vector for each node. Training a GraphConvModelLet's use the MoleculeNet suite to load the Tox21 dataset. To featurize the data in a way that graph convolutional networks can use, we set the featurizer option to `'GraphConv'`. The MoleculeNet call returns a training set, a validation set, and a test set for us to use. It also returns `tasks`, a list of the task names, and `transformers`, a list of data transformations that were applied to preprocess the dataset. (Most deep networks are quite finicky and require a set of data transformations to ensure that training proceeds stably.) ###Code import deepchem as dc tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='GraphConv') train_dataset, valid_dataset, test_dataset = datasets ###Output _____no_output_____ ###Markdown Let's now train a graph convolutional network on this dataset. DeepChem has the class `GraphConvModel` that wraps a standard graph convolutional architecture underneath the hood for user convenience. Let's instantiate an object of this class and train it on our dataset. 
###Code n_tasks = len(tasks) model = dc.models.GraphConvModel(n_tasks, mode='classification') model.fit(train_dataset, nb_epoch=50) ###Output _____no_output_____ ###Markdown Let's try to evaluate the performance of the model we've trained. For this, we need to define a metric, a measure of model performance. `dc.metrics` holds a collection of metrics already. For this dataset, it is standard to use the ROC-AUC score, the area under the receiver operating characteristic curve (which measures the tradeoff between the true positive rate and the false positive rate). Luckily, the ROC-AUC score is already available in DeepChem. To measure the performance of the model under this metric, we can use the convenience function `model.evaluate()`. ###Code metric = dc.metrics.Metric(dc.metrics.roc_auc_score) print('Training set score:', model.evaluate(train_dataset, [metric], transformers)) print('Test set score:', model.evaluate(test_dataset, [metric], transformers)) ###Output Training set score: {'roc_auc_score': 0.96959686893055} Test set score: {'roc_auc_score': 0.795793783300876} ###Markdown The results are pretty good, and `GraphConvModel` is very easy to use. But what's going on under the hood? Could we build GraphConvModel ourselves? Of course! DeepChem provides Keras layers for all the calculations involved in a graph convolution. We are going to apply the following layers from DeepChem.- `GraphConv` layer: This layer implements the graph convolution. The graph convolution combines per-node feature vectors in a nonlinear fashion with the feature vectors for neighboring nodes. This "blends" information in local neighborhoods of a graph.- `GraphPool` layer: This layer does a max-pooling over the feature vectors of atoms in a neighborhood. You can think of this layer as analogous to a max-pooling layer for 2D convolutions but which operates on graphs instead. - `GraphGather`: Many graph convolutional networks manipulate feature vectors per graph-node.
For a molecule for example, each node might represent an atom, and the network would manipulate atomic feature vectors that summarize the local chemistry of the atom. However, at the end of the application, we will likely want to work with a molecule level feature representation. This layer creates a graph level feature vector by combining all the node-level feature vectors.Apart from this we are going to apply standard neural network layers such as [Dense](https://keras.io/api/layers/core_layers/dense/), [BatchNormalization](https://keras.io/api/layers/normalization_layers/batch_normalization/) and [Softmax](https://keras.io/api/layers/activation_layers/softmax/) layer. ###Code from deepchem.models.layers import GraphConv, GraphPool, GraphGather import tensorflow as tf import tensorflow.keras.layers as layers batch_size = 100 class MyGraphConvModel(tf.keras.Model): def __init__(self): super(MyGraphConvModel, self).__init__() self.gc1 = GraphConv(128, activation_fn=tf.nn.tanh) self.batch_norm1 = layers.BatchNormalization() self.gp1 = GraphPool() self.gc2 = GraphConv(128, activation_fn=tf.nn.tanh) self.batch_norm2 = layers.BatchNormalization() self.gp2 = GraphPool() self.dense1 = layers.Dense(256, activation=tf.nn.tanh) self.batch_norm3 = layers.BatchNormalization() self.readout = GraphGather(batch_size=batch_size, activation_fn=tf.nn.tanh) self.dense2 = layers.Dense(n_tasks*2) self.logits = layers.Reshape((n_tasks, 2)) self.softmax = layers.Softmax() def call(self, inputs): gc1_output = self.gc1(inputs) batch_norm1_output = self.batch_norm1(gc1_output) gp1_output = self.gp1([batch_norm1_output] + inputs[1:]) gc2_output = self.gc2([gp1_output] + inputs[1:]) batch_norm2_output = self.batch_norm2(gc2_output) gp2_output = self.gp2([batch_norm2_output] + inputs[1:]) dense1_output = self.dense1(gp2_output) batch_norm3_output = self.batch_norm3(dense1_output) readout_output = self.readout([batch_norm3_output] + inputs[1:]) logits_output =
self.logits(self.dense2(readout_output)) return self.softmax(logits_output) ###Output _____no_output_____ ###Markdown We can now see more clearly what is happening. There are two convolutional blocks, each consisting of a `GraphConv`, followed by batch normalization, followed by a `GraphPool` to do max pooling. We finish up with a dense layer, another batch normalization, a `GraphGather` to combine the data from all the different nodes, and a final dense layer to produce the global output. Let's now create the DeepChem model which will be a wrapper around the Keras model that we just created. We will also specify the loss function so the model knows the objective to minimize. ###Code model = dc.models.KerasModel(MyGraphConvModel(), loss=dc.models.losses.CategoricalCrossEntropy()) ###Output _____no_output_____ ###Markdown What are the inputs to this model? A graph convolution requires a complete description of each molecule, including the list of nodes (atoms) and a description of which ones are bonded to each other. In fact, if we inspect the dataset we see that the feature array contains Python objects of type `ConvMol`. ###Code test_dataset.X[0] ###Output _____no_output_____ ###Markdown Models expect arrays of numbers as their inputs, not Python objects. We must convert the `ConvMol` objects into the particular set of arrays expected by the `GraphConv`, `GraphPool`, and `GraphGather` layers. Fortunately, the `ConvMol` class includes the code to do this, as well as to combine all the molecules in a batch to create a single set of arrays.The following code creates a Python generator that, given a batch of data, generates the lists of inputs, labels, and weights whose values are Numpy arrays. `atom_features` holds a feature vector of length 75 for each atom. The other inputs are required to support minibatching in TensorFlow. `degree_slice` is an indexing convenience that makes it easy to locate atoms from all molecules with a given degree.
`membership` determines the membership of atoms in molecules (atom `i` belongs to molecule `membership[i]`). `deg_adjs` is a list that contains adjacency lists grouped by atom degree. For more details, check out the [code](https://github.com/deepchem/deepchem/blob/master/deepchem/feat/mol_graphs.py). ###Code from deepchem.metrics import to_one_hot from deepchem.feat.mol_graphs import ConvMol import numpy as np def data_generator(dataset, epochs=1): for ind, (X_b, y_b, w_b, ids_b) in enumerate(dataset.iterbatches(batch_size, epochs, deterministic=False, pad_batches=True)): multiConvMol = ConvMol.agglomerate_mols(X_b) inputs = [multiConvMol.get_atom_features(), multiConvMol.deg_slice, np.array(multiConvMol.membership)] for i in range(1, len(multiConvMol.get_deg_adjacency_lists())): inputs.append(multiConvMol.get_deg_adjacency_lists()[i]) labels = [to_one_hot(y_b.flatten(), 2).reshape(-1, n_tasks, 2)] weights = [w_b] yield (inputs, labels, weights) ###Output _____no_output_____ ###Markdown Now, we can train the model using `fit_generator(generator)` which will use the generator we've defined to train the model. ###Code model.fit_generator(data_generator(train_dataset, epochs=50)) ###Output _____no_output_____ ###Markdown Now that we have trained our graph convolutional method, let's evaluate its performance. We again have to use our defined generator to evaluate model performance. ###Code print('Training set score:', model.evaluate_generator(data_generator(train_dataset), [metric], transformers)) print('Test set score:', model.evaluate_generator(data_generator(test_dataset), [metric], transformers)) ###Output Training set score: {'roc_auc_score': 0.8425638289185731} Test set score: {'roc_auc_score': 0.7378436684114341} ###Markdown Tutorial Part 6: Introduction to Graph ConvolutionsIn this tutorial we will learn more about "graph convolutions." These are one of the most powerful deep learning tools for working with molecular data. 
The reason for this is that molecules can be naturally viewed as graphs.![Molecular Graph](https://github.com/deepchem/deepchem/blob/master/examples/tutorials/basic_graphs.gif?raw=1)Note how standard chemical diagrams of the sort we're used to from high school lend themselves naturally to visualizing molecules as graphs. In the remainder of this tutorial, we'll dig into this relationship in significantly more detail. This will let us get a deeper understanding of how these systems work. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/06_Introduction_to_Graph_Convolutions.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Anaconda on your local machine. ###Code !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem ###Output _____no_output_____ ###Markdown What are Graph Convolutions?Consider a standard convolutional neural network (CNN) of the sort commonly used to process images. The input is a grid of pixels. There is a vector of data values for each pixel, for example the red, green, and blue color channels. The data passes through a series of convolutional layers. Each layer combines the data from a pixel and its neighbors to produce a new data vector for the pixel. 
Early layers detect small scale local patterns, while later layers detect larger, more abstract patterns. Often the convolutional layers alternate with pooling layers that perform some operation such as max or min over local regions.Graph convolutions are similar, but they operate on a graph. They begin with a data vector for each node of the graph (for example, the chemical properties of the atom that node represents). Convolutional and pooling layers combine information from connected nodes (for example, atoms that are bonded to each other) to produce a new data vector for each node. Training a GraphConvModelLet's use the MoleculeNet suite to load the Tox21 dataset. To featurize the data in a way that graph convolutional networks can use, we set the featurizer option to `'GraphConv'`. The MoleculeNet call returns a training set, a validation set, and a test set for us to use. It also returns `tasks`, a list of the task names, and `transformers`, a list of data transformations that were applied to preprocess the dataset. (Most deep networks are quite finicky and require a set of data transformations to ensure that training proceeds stably.) ###Code import deepchem as dc tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='GraphConv') train_dataset, valid_dataset, test_dataset = datasets ###Output _____no_output_____ ###Markdown Let's now train a graph convolutional network on this dataset. DeepChem has the class `GraphConvModel` that wraps a standard graph convolutional architecture underneath the hood for user convenience. Let's instantiate an object of this class and train it on our dataset. ###Code n_tasks = len(tasks) model = dc.models.GraphConvModel(n_tasks, mode='classification') model.fit(train_dataset, nb_epoch=50) ###Output _____no_output_____ ###Markdown Let's try to evaluate the performance of the model we've trained. For this, we need to define a metric, a measure of model performance. `dc.metrics` holds a collection of metrics already. 
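One such metric, ROC-AUC, is used throughout this tutorial. As a hedged, dependency-free illustration of what it computes (this toy function uses the equivalent Mann-Whitney rank-statistic formulation and is for intuition only — the tutorial itself uses DeepChem's built-in metric):

```python
def roc_auc(labels, scores):
    """ROC-AUC as P(score of a random positive > score of a random negative),
    counting ties as 1/2 (the Mann-Whitney U formulation of AUC)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two positives, two negatives.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

A score of 1.0 means the model ranks every positive above every negative; 0.5 is chance level.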
For this dataset, it is standard to use the ROC-AUC score, the area under the receiver operating characteristic curve (which measures the tradeoff between the true positive rate and the false positive rate). Luckily, the ROC-AUC score is already available in DeepChem. To measure the performance of the model under this metric, we can use the convenience function `model.evaluate()`. ###Code metric = dc.metrics.Metric(dc.metrics.roc_auc_score) print('Training set score:', model.evaluate(train_dataset, [metric], transformers)) print('Test set score:', model.evaluate(test_dataset, [metric], transformers)) ###Output Training set score: {'roc_auc_score': 0.96959686893055} Test set score: {'roc_auc_score': 0.795793783300876} ###Markdown The results are pretty good, and `GraphConvModel` is very easy to use. But what's going on under the hood? Could we build GraphConvModel ourselves? Of course! DeepChem provides Keras layers for all the calculations involved in a graph convolution. We are going to apply the following layers from DeepChem.- `GraphConv` layer: This layer implements the graph convolution. The graph convolution combines per-node feature vectors in a nonlinear fashion with the feature vectors for neighboring nodes. This "blends" information in local neighborhoods of a graph.- `GraphPool` layer: This layer does a max-pooling over the feature vectors of atoms in a neighborhood. You can think of this layer as analogous to a max-pooling layer for 2D convolutions but which operates on graphs instead. - `GraphGather`: Many graph convolutional networks manipulate feature vectors per graph-node. For a molecule, for example, each node might represent an atom, and the network would manipulate atomic feature vectors that summarize the local chemistry of the atom. However, at the end of the application, we will likely want to work with a molecule-level feature representation.
This layer creates a graph-level feature vector by combining all the node-level feature vectors.Apart from this, we are going to apply standard neural network layers such as [Dense](https://keras.io/api/layers/core_layers/dense/), [BatchNormalization](https://keras.io/api/layers/normalization_layers/batch_normalization/) and [Softmax](https://keras.io/api/layers/activation_layers/softmax/) layers. ###Code from deepchem.models.layers import GraphConv, GraphPool, GraphGather import tensorflow as tf import tensorflow.keras.layers as layers batch_size = 100 class MyGraphConvModel(tf.keras.Model): def __init__(self): super(MyGraphConvModel, self).__init__() self.gc1 = GraphConv(128, activation_fn=tf.nn.tanh) self.batch_norm1 = layers.BatchNormalization() self.gp1 = GraphPool() self.gc2 = GraphConv(128, activation_fn=tf.nn.tanh) self.batch_norm2 = layers.BatchNormalization() self.gp2 = GraphPool() self.dense1 = layers.Dense(256, activation=tf.nn.tanh) self.batch_norm3 = layers.BatchNormalization() self.readout = GraphGather(batch_size=batch_size, activation_fn=tf.nn.tanh) self.dense2 = layers.Dense(n_tasks*2) self.logits = layers.Reshape((n_tasks, 2)) self.softmax = layers.Softmax() def call(self, inputs): gc1_output = self.gc1(inputs) batch_norm1_output = self.batch_norm1(gc1_output) gp1_output = self.gp1([batch_norm1_output] + inputs[1:]) gc2_output = self.gc2([gp1_output] + inputs[1:]) batch_norm2_output = self.batch_norm2(gc2_output) gp2_output = self.gp2([batch_norm2_output] + inputs[1:]) dense1_output = self.dense1(gp2_output) batch_norm3_output = self.batch_norm3(dense1_output) readout_output = self.readout([batch_norm3_output] + inputs[1:]) logits_output = self.logits(self.dense2(readout_output)) return self.softmax(logits_output) ###Output _____no_output_____ ###Markdown We can now see more clearly what is happening. There are two convolutional blocks, each consisting of a `GraphConv`, followed by batch normalization, followed by a `GraphPool` to do max pooling.
We finish up with a dense layer, another batch normalization, a `GraphGather` to combine the data from all the different nodes, and a final dense layer to produce the global output. Let's now create the DeepChem model, which will be a wrapper around the Keras model that we just created. We will also specify the loss function so the model knows the objective to minimize. ###Code model = dc.models.KerasModel(MyGraphConvModel(), loss=dc.models.losses.CategoricalCrossEntropy()) ###Output _____no_output_____ ###Markdown What are the inputs to this model? A graph convolution requires a complete description of each molecule, including the list of nodes (atoms) and a description of which ones are bonded to each other. In fact, if we inspect the dataset we see that the feature array contains Python objects of type `ConvMol`. ###Code test_dataset.X[0] ###Output _____no_output_____ ###Markdown Models expect arrays of numbers as their inputs, not Python objects. We must convert the `ConvMol` objects into the particular set of arrays expected by the `GraphConv`, `GraphPool`, and `GraphGather` layers. Fortunately, the `ConvMol` class includes the code to do this, as well as to combine all the molecules in a batch to create a single set of arrays.The following code creates a Python generator that, given a batch of data, generates the lists of inputs, labels, and weights whose values are Numpy arrays. `atom_features` holds a feature vector of length 75 for each atom. The other inputs are required to support minibatching in TensorFlow. `degree_slice` is an indexing convenience that makes it easy to locate atoms from all molecules with a given degree. `membership` determines the membership of atoms in molecules (atom `i` belongs to molecule `membership[i]`). `deg_adjs` is a list that contains adjacency lists grouped by atom degree. For more details, check out the [code](https://github.com/deepchem/deepchem/blob/master/deepchem/feat/mol_graphs.py).
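As a toy illustration of the `membership` array described above (hypothetical numbers, independent of DeepChem — the real arrays come from `ConvMol.agglomerate_mols`):

```python
def group_by_molecule(atom_features, membership):
    """Collect per-atom feature vectors into per-molecule lists,
    where membership[i] is the index of the molecule atom i belongs to."""
    molecules = {}
    for feat, mol_idx in zip(atom_features, membership):
        molecules.setdefault(mol_idx, []).append(feat)
    return molecules

# Five atoms from a batch of two molecules (3 atoms + 2 atoms).
feats = [[0.1], [0.2], [0.3], [0.4], [0.5]]
membership = [0, 0, 0, 1, 1]
print(group_by_molecule(feats, membership))
# -> {0: [[0.1], [0.2], [0.3]], 1: [[0.4], [0.5]]}
```

This grouping is what allows `GraphGather` to pool atom-level vectors back into one vector per molecule even though the batch is stored as a single flat array of atoms.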
###Code from deepchem.metrics import to_one_hot from deepchem.feat.mol_graphs import ConvMol import numpy as np def data_generator(dataset, epochs=1): for ind, (X_b, y_b, w_b, ids_b) in enumerate(dataset.iterbatches(batch_size, epochs, deterministic=False, pad_batches=True)): multiConvMol = ConvMol.agglomerate_mols(X_b) inputs = [multiConvMol.get_atom_features(), multiConvMol.deg_slice, np.array(multiConvMol.membership)] for i in range(1, len(multiConvMol.get_deg_adjacency_lists())): inputs.append(multiConvMol.get_deg_adjacency_lists()[i]) labels = [to_one_hot(y_b.flatten(), 2).reshape(-1, n_tasks, 2)] weights = [w_b] yield (inputs, labels, weights) ###Output _____no_output_____ ###Markdown Now, we can train the model using `fit_generator(generator)` which will use the generator we've defined to train the model. ###Code model.fit_generator(data_generator(train_dataset, epochs=50)) ###Output _____no_output_____ ###Markdown Now that we have trained our graph convolutional method, let's evaluate its performance. We again have to use our defined generator to evaluate model performance. ###Code print('Training set score:', model.evaluate_generator(data_generator(train_dataset), [metric], transformers)) print('Test set score:', model.evaluate_generator(data_generator(test_dataset), [metric], transformers)) ###Output Training set score: {'roc_auc_score': 0.8425638289185731} Test set score: {'roc_auc_score': 0.7378436684114341}
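To recap the core operation used throughout this tutorial: the neighbor aggregation inside a graph convolution can be sketched in a few lines of pure Python. This is a deliberately simplified sum-of-neighbors update with no learned parameters — the real `GraphConv` layer also applies degree-specific weight matrices and a nonlinearity:

```python
def graph_conv_step(features, adjacency):
    """One simplified graph-convolution step: each node's new vector is
    its own vector plus the sum of its neighbors' vectors."""
    new_features = []
    for node, feat in enumerate(features):
        combined = list(feat)
        for neighbor in adjacency[node]:
            combined = [c + n for c, n in zip(combined, features[neighbor])]
        new_features.append(combined)
    return new_features

# Triangle graph (every node bonded to the other two), 1-d features.
feats = [[1.0], [2.0], [4.0]]
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(graph_conv_step(feats, adj))  # -> [[7.0], [7.0], [7.0]]
```

Stacking several such steps lets information propagate beyond immediate neighbors, which is why deeper graph convolutional networks capture larger substructures.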
Baselines/DEBERTA/CoLA_Deberta_SMART.ipynb
###Markdown DeBERTa Fine-Tuning on CoLA with SMART and SiFTThis notebook was originally created by Chris McCormick and Nick Ryan. We made changes for SiFT and SMART, as well as our custom DeBERTa class. Data and Importing Modules ###Code import tensorflow as tf # Get the GPU device name. device_name = tf.test.gpu_device_name() # The device name should look like the following: if device_name == '/device:GPU:0': print('Found GPU at: {}'.format(device_name)) else: raise SystemError('GPU device not found') import torch # If there's a GPU available... if torch.cuda.is_available(): # Tell PyTorch to use the GPU. device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) # If not... else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") !pip install transformers !pip install wget import wget import os print('Downloading dataset...') # The URL for the dataset zip file. url = 'https://nyu-mll.github.io/CoLA/cola_public_1.1.zip' # Download the file (if we haven't already) if not os.path.exists('./cola_public_1.1.zip'): wget.download(url, './cola_public_1.1.zip') # Unzip the dataset (if we haven't already) if not os.path.exists('./cola_public/'): !unzip cola_public_1.1.zip import pandas as pd # Load the dataset into a pandas dataframe. df = pd.read_csv("./cola_public/raw/in_domain_train.tsv", delimiter='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence']) # Report the number of sentences. print('Number of training sentences: {:,}\n'.format(df.shape[0])) # Display 10 random rows from the data. df.sample(10) df.loc[df.label == 0].sample(5)[['sentence', 'label']] # Get the lists of sentences and their labels.
sentences = df.sentence.values labels = df.label.values ###Output _____no_output_____ ###Markdown Tokenization and DataLoader ###Code from transformers import DebertaTokenizer print('Loading DeBERTa tokenizer...') tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base', do_lower_case=True) print(' Original: ', sentences[0]) print('Tokenized: ', tokenizer.tokenize(sentences[0])) print('Token IDs: ', tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentences[0]))) max_len = 0 for sent in sentences: input_ids = tokenizer.encode(sent, add_special_tokens=True) max_len = max(max_len, len(input_ids)) print('Max sentence length: ', max_len) input_ids = [] attention_masks = [] for sent in sentences: encoded_dict = tokenizer.encode_plus( sent, add_special_tokens = True, max_length = 64, pad_to_max_length = True, return_attention_mask = True, return_tensors = 'pt', ) input_ids.append(encoded_dict['input_ids']) attention_masks.append(encoded_dict['attention_mask']) input_ids = torch.cat(input_ids, dim=0) attention_masks = torch.cat(attention_masks, dim=0) labels = torch.tensor(labels) print('Original: ', sentences[0]) print('Token IDs:', input_ids[0]) from torch.utils.data import TensorDataset, random_split dataset = TensorDataset(input_ids, attention_masks, labels) train_size = int(0.9 * len(dataset)) val_size = len(dataset) - train_size train_dataset, val_dataset = random_split(dataset, [train_size, val_size]) print('{:>5,} training samples'.format(train_size)) print('{:>5,} validation samples'.format(val_size)) from torch.utils.data import DataLoader, RandomSampler, SequentialSampler batch_size = 32 train_dataloader = DataLoader( train_dataset, sampler = RandomSampler(train_dataset), batch_size = batch_size ) validation_dataloader = DataLoader( val_dataset, sampler = SequentialSampler(val_dataset), batch_size = batch_size ) ###Output _____no_output_____ ###Markdown Custom Deberta Class and Initialization ###Code from transformers import 
DebertaForSequenceClassification, AdamW, DebertaConfig, DebertaPreTrainedModel, DebertaModel from transformers.models.deberta.modeling_deberta import * #from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions import torch import torch.utils.checkpoint from torch import nn from torch.nn import CrossEntropyLoss, MSELoss class CustomDebertaForClassification(DebertaForSequenceClassification): def __init__(self, config): super().__init__(config) #self.bert = BertForSequenceClassification(config).from_pretrained("bert-base-uncased",num_labels = 2,output_attentions = False, output_hidden_states = False) self.embeddings = self.deberta.embeddings self.encoder = self.deberta.encoder self.z_steps = 0 #copied from DebertaModel source code def embed(self, input_ids=None, mask=None, token_type_ids=None, position_ids=None, inputs_embeds=None ): # See: BERTModel.forward return self.embeddings( input_ids=input_ids, token_type_ids=token_type_ids, position_ids=position_ids, mask=mask, inputs_embeds=inputs_embeds ) def predict(self,embedding_output, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_extended_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=True): encoder_outputs = self.encoder( embedding_output, attention_mask, output_hidden_states=True, output_attentions=output_attentions, return_dict=return_dict ) encoded_layers = encoder_outputs[1] if self.z_steps > 1: hidden_states = encoded_layers[-2] layers = [self.encoder.layer[-1] for _ in range(self.z_steps)] query_states = encoded_layers[-1] rel_embeddings = self.encoder.get_rel_embedding() attention_mask = self.encoder.get_attention_mask(attention_mask) rel_pos = self.encoder.get_rel_pos(embedding_output) for layer in layers[1:]: query_states = layer( hidden_states, attention_mask, return_att=False, query_states=query_states, relative_pos=rel_pos, rel_embeddings=rel_embeddings, ) 
encoded_layers.append(query_states) sequence_output = encoded_layers[-1] # if not return_dict: # return (sequence_output,) + encoder_outputs[(1 if output_hidden_states else 2) :] outputs = BaseModelOutput( last_hidden_state=sequence_output, hidden_states=encoder_outputs.hidden_states if output_hidden_states else None, attentions=encoder_outputs.attentions, ) pooled_output = self.pooler(outputs[0]) pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) return logits #@title model = CustomDebertaForClassification.from_pretrained( "microsoft/deberta-base", num_labels = 2, output_attentions = False, output_hidden_states = False, ) model.cuda() ###Output Some weights of the model checkpoint at microsoft/deberta-base were not used when initializing CustomDebertaForClassification: ['lm_predictions.lm_head.LayerNorm.bias', 'lm_predictions.lm_head.dense.bias', 'lm_predictions.lm_head.dense.weight', 'config', 'lm_predictions.lm_head.LayerNorm.weight', 'lm_predictions.lm_head.bias'] - This IS expected if you are initializing CustomDebertaForClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing CustomDebertaForClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of CustomDebertaForClassification were not initialized from the model checkpoint at microsoft/deberta-base and are newly initialized: ['encoder.layer.5.intermediate.dense.weight', 'classifier.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.3.attention.self.pos_proj.weight', 'encoder.layer.9.attention.self.pos_q_proj.bias', 'encoder.layer.5.attention.self.pos_proj.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.1.attention.self.pos_q_proj.bias', 'embeddings.LayerNorm.bias', 'classifier.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.2.attention.self.pos_q_proj.bias', 'encoder.layer.3.attention.self.q_bias', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.0.attention.self.pos_q_proj.bias', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.8.output.LayerNorm.weight', 'embeddings.LayerNorm.weight', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.11.attention.self.pos_proj.weight', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.0.attention.self.q_bias', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.11.attention.self.pos_q_proj.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.2.attention.self.pos_q_proj.weight', 'encoder.layer.5.attention.self.in_proj.weight', 'encoder.layer.7.attention.self.pos_q_proj.weight', 
'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.4.output.dense.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.0.attention.self.in_proj.weight', 'encoder.layer.1.attention.self.in_proj.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.10.attention.self.in_proj.weight', 'encoder.layer.10.attention.self.q_bias', 'encoder.layer.9.attention.self.in_proj.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.2.attention.self.in_proj.weight', 'encoder.layer.6.attention.self.pos_q_proj.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.self.pos_proj.weight', 'encoder.layer.6.attention.self.pos_q_proj.weight', 'encoder.layer.6.attention.self.in_proj.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.3.attention.self.pos_q_proj.weight', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'embeddings.word_embeddings.weight', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.5.output.dense.weight', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.self.pos_q_proj.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.1.output.dense.bias', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.0.attention.self.pos_proj.weight', 'encoder.layer.5.attention.self.pos_q_proj.weight', 'encoder.layer.8.attention.self.v_bias', 'encoder.layer.11.attention.self.in_proj.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.4.attention.self.pos_q_proj.weight', 'encoder.layer.2.intermediate.dense.weight', 
'encoder.layer.6.attention.self.q_bias', 'encoder.layer.2.attention.self.pos_proj.weight', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.7.output.dense.bias', 'encoder.rel_embeddings.weight', 'pooler.dense.bias', 'encoder.layer.3.attention.self.in_proj.weight', 'encoder.layer.9.attention.self.q_bias', 'encoder.layer.5.attention.self.pos_q_proj.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.self.pos_q_proj.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.1.attention.self.v_bias', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.8.attention.self.in_proj.weight', 'encoder.layer.5.attention.self.q_bias', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.3.attention.self.v_bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.1.attention.self.q_bias', 'encoder.layer.10.attention.self.pos_proj.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.6.attention.self.v_bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.self.q_bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.self.pos_q_proj.bias', 'pooler.dense.weight', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.11.attention.self.v_bias', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.9.output.dense.weight', 
'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.10.output.dense.weight', 'encoder.layer.7.attention.self.in_proj.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.1.attention.self.pos_q_proj.weight', 'encoder.layer.10.attention.self.pos_q_proj.bias', 'encoder.layer.11.attention.self.q_bias', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.self.pos_q_proj.bias', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.0.attention.self.pos_q_proj.weight', 'encoder.layer.11.output.dense.weight', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.10.attention.self.pos_q_proj.weight', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.3.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.9.attention.self.pos_q_proj.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.0.attention.self.v_bias', 'encoder.layer.4.attention.self.in_proj.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.4.attention.self.q_bias', 'encoder.layer.1.attention.self.pos_proj.weight', 'encoder.layer.2.attention.self.v_bias', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.7.attention.self.q_bias', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.8.attention.self.pos_proj.weight', 
'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.7.attention.self.pos_proj.weight', 'encoder.layer.10.attention.self.v_bias', 'encoder.layer.4.attention.self.v_bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.attention.self.q_bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.9.attention.self.v_bias', 'encoder.layer.7.attention.self.pos_q_proj.bias', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.self.pos_proj.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.pos_q_proj.weight', 'encoder.layer.7.attention.self.v_bias', 'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.9.attention.self.pos_proj.weight', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.output.dense.bias', 'encoder.layer.5.attention.self.v_bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
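Before defining the noise function, note why the custom class above exposes `embed` and `predict` separately: the forward pass is factored as `predict(embed(x))`, so an adversarial perturbation can be injected between the two stages. A toy sketch of that decomposition (plain Python stand-ins, not the real model):

```python
def embed(x):
    """Stand-in for model.embed: inputs -> embedding vector."""
    return [2.0 * v for v in x]

def predict(e):
    """Stand-in for model.predict: embedding vector -> score."""
    return sum(e)

def forward(x):
    """The usual end-to-end pass."""
    return predict(embed(x))

def forward_perturbed(x, noise_vec):
    """Same pass, but with noise injected at the embedding level."""
    e = embed(x)
    e = [v + n for v, n in zip(e, noise_vec)]
    return predict(e)

x = [1.0, 2.0]
print(forward(x))                          # -> 6.0
print(forward_perturbed(x, [1.0, 1.0]))    # -> 8.0
```

In SMART/SiFT the perturbation is not arbitrary: it is chosen (via a gradient step) to maximally change the model's predictions, and the training loss then penalizes that change.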
###Markdown Noise Function ###Code from torch.nn import LayerNorm import torch.nn.functional as F def normalize_embed(embed): embed_mean = torch.mean(embed,dim=(1,2)) embed_std = torch.std(embed, dim=(1,2)) embed_clone = torch.clone(embed) for i in range(0,embed_clone.size()[0]): # embed_clone[i] = torch.div(torch.sub(embed_clone[i],embed_mean[i]),embed_std[i]) embed_clone[i] = (embed_clone[i] - embed_mean[i]) / embed_std[i] return embed_clone, embed_mean, embed_std def denormalize_embed(embed, embed_mean, embed_std): for i in range(0,embed.size()[0]): # embed[i] = (embed[i] - embed_mean[i]) / embed_std[i] embed[i] = (embed[i] * embed_std[i]) + embed_mean[i] return embed def stable_kl(logit, target, epsilon=1e-6, reduce=True): logit = logit.view(-1, logit.size(-1)).float() target = target.view(-1, target.size(-1)).float() bs = logit.size(0) p = F.log_softmax(logit, 1).exp() y = F.log_softmax(target, 1).exp() rp = -(1.0/(p + epsilon) -1 + epsilon).detach().log() ry = -(1.0/(y + epsilon) -1 + epsilon).detach().log() if reduce: return (p* (rp- ry) * 2).sum() / bs else: return (p* (rp- ry) * 2).sum() def _norm_grad(grad, epsilon = 1e-6, eff_grad=None, sentence_level=False): if sentence_level: direction = grad / (grad.abs().max((-2, -1), keepdim=True)[0] + epsilon) else: direction = grad / (grad.abs().max(-1, keepdim=True)[0] + epsilon) eff_direction = eff_grad / (grad.abs().max(-1, keepdim=True)[0] + epsilon) return direction, eff_direction def noise(embed, model, attention_mask, step_size, normalize=False, k=1, mean=0, std=0.01): if normalize == True: logits = model.predict(embed,attention_mask) # LNorm = LayerNorm(embed.size(),elementwise_affine=False) # normalized_embed = LNorm(embed) normalized_embed, embed_mean, embed_std = normalize_embed(embed) noise = torch.normal(mean=0, std=0.01,size=(normalized_embed.size()[0],normalized_embed.size()[1],normalized_embed.size()[2])) noise = noise.to(device) noise.requires_grad_() noised_normalized_embeddings = 
normalized_embed+noise adv_logits = model.predict(noised_normalized_embeddings, attention_mask) adv_loss = stable_kl(adv_logits, logits.detach(), reduce=False) delta_grad, = torch.autograd.grad(adv_loss, noise, only_inputs=True, retain_graph=False) norm = delta_grad.norm() # if (torch.isnan(norm) or torch.isinf(norm)): # return 0 eff_delta_grad = delta_grad * step_size delta_grad = noise + delta_grad * step_size noise, eff_noise = _norm_grad(delta_grad, eff_grad=eff_delta_grad, sentence_level=0) noise = noise.detach() noised_normalized_embeddings = normalized_embed+noise denormalize_noised_embed = denormalize_embed(noised_normalized_embeddings,embed_mean, embed_std) return denormalize_noised_embed else: logits = model.predict(embed,attention_mask) noise = torch.normal(mean=0, std=0.01,size=(embed.size()[0],embed.size()[1],embed.size()[2])) noise = noise.to(device) noise.requires_grad_() noised_embeddings = embed+noise adv_logits = model.predict(noised_embeddings, attention_mask) adv_loss = stable_kl(adv_logits, logits.detach(), reduce=False) delta_grad, = torch.autograd.grad(adv_loss, noise, only_inputs=True, retain_graph=False) norm = delta_grad.norm() # if (torch.isnan(norm) or torch.isinf(norm)): # return 0 eff_delta_grad = delta_grad * step_size delta_grad = noise + delta_grad * step_size noise, eff_noise = _norm_grad(delta_grad, eff_grad=eff_delta_grad, sentence_level=0) noise = noise.detach() noised_embeddings = embed+noise return noised_embeddings ###Output _____no_output_____ ###Markdown Optimizer, Scheduler, and Some Other Training Prep ###Code #@title optimizer = AdamW(model.parameters(), lr = 2e-5, eps = 1e-8 ) #@title from transformers import get_linear_schedule_with_warmup epochs = 6 total_steps = len(train_dataloader) * epochs scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, num_training_steps = total_steps ) #@title import numpy as np def flat_accuracy(preds, labels): pred_flat = np.argmax(preds, axis=1).flatten() 
labels_flat = labels.flatten() return np.sum(pred_flat == labels_flat) / len(labels_flat) #@title import time import datetime def format_time(elapsed): elapsed_rounded = int(round((elapsed))) return str(datetime.timedelta(seconds=elapsed_rounded)) MODE = "SMART-adv-only" ###Output _____no_output_____ ###Markdown Training Loop with Validation ###Code import random import numpy as np seed_val = 42 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) training_stats = [] total_t0 = time.time() # For each epoch... for epoch_i in range(0, epochs): # ======================================== # Training # ======================================== # Perform one full pass over the training set. print("") print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') # Measure how long the training epoch takes. t0 = time.time() total_train_loss = 0 model.train() # For each batch of training data... for step, batch in enumerate(train_dataloader): # Progress update every 40 batches. if step % 40 == 0 and not step == 0: elapsed = format_time(time.time() - t0) print(' Batch {:>5,} of {:>5,}. 
Elapsed: {:}.'.format(step, len(train_dataloader), elapsed)) b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) model.zero_grad() embed = model.embed(input_ids = b_input_ids,mask = b_input_mask) preds = model.predict(embedding_output = embed,attention_mask = b_input_mask) loss_fct = CrossEntropyLoss() regular_loss = loss_fct(preds.view(-1,2), b_labels.view(-1)) loss_list = [regular_loss] if MODE in ["SMART-adv-only", "SIFT"]: normalise = True if MODE == "SIFT" else False noised_embeddings = noise(embed, model, b_input_mask, 1e-3, normalize=normalise, k=1) adv_logits = model.predict(noised_embeddings, b_input_mask) adv_loss = stable_kl(preds.view(-1,2), adv_logits.view(-1,2)) loss_list.append(adv_loss) loss = sum(loss_list) # END MODEL total_train_loss += loss.item() loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() scheduler.step() avg_train_loss = total_train_loss / len(train_dataloader) training_time = format_time(time.time() - t0) print("") print(" Average training loss: {0:.2f}".format(avg_train_loss)) print(" Training epoch took: {:}".format(training_time)) # ======================================== # Validation # ======================================== # After the completion of each training epoch, measure our performance on # our validation set.
print("") print("Running Validation...") t0 = time.time() model.eval() total_eval_accuracy = 0 total_eval_loss = 0 nb_eval_steps = 0 # Evaluate data for one epoch for batch in validation_dataloader: b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) with torch.no_grad(): result = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels, return_dict=True) loss = result.loss logits = result.logits total_eval_loss += loss.item() logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() total_eval_accuracy += flat_accuracy(logits, label_ids) avg_val_accuracy = total_eval_accuracy / len(validation_dataloader) print(" Accuracy: {0:.2f}".format(avg_val_accuracy)) avg_val_loss = total_eval_loss / len(validation_dataloader) validation_time = format_time(time.time() - t0) print(" Validation Loss: {0:.2f}".format(avg_val_loss)) print(" Validation took: {:}".format(validation_time)) training_stats.append( { 'epoch': epoch_i + 1, 'Training Loss': avg_train_loss, 'Valid. Loss': avg_val_loss, 'Valid. Accur.': avg_val_accuracy, 'Training Time': training_time, 'Validation Time': validation_time } ) print("") print("Training complete!") print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0))) ###Output ======== Epoch 1 / 6 ======== Training... Batch 40 of 241. Elapsed: 0:00:26. Batch 80 of 241. Elapsed: 0:00:52. Batch 120 of 241. Elapsed: 0:01:18. Batch 160 of 241. Elapsed: 0:01:44. Batch 200 of 241. Elapsed: 0:02:10. Batch 240 of 241. Elapsed: 0:02:36. Average training loss: 0.63 Training epcoh took: 0:02:37 Running Validation... Accuracy: 0.81 Validation Loss: 0.45 Validation took: 0:00:02 ======== Epoch 2 / 6 ======== Training... Batch 40 of 241. Elapsed: 0:00:26. Batch 80 of 241. Elapsed: 0:00:52. Batch 120 of 241. Elapsed: 0:01:18. Batch 160 of 241. Elapsed: 0:01:44. Batch 200 of 241. Elapsed: 0:02:10. Batch 240 of 241. Elapsed: 0:02:36. 
Average training loss: 0.57 Training epcoh took: 0:02:37 Running Validation... Accuracy: 0.85 Validation Loss: 0.38 Validation took: 0:00:02 ======== Epoch 3 / 6 ======== Training... Batch 40 of 241. Elapsed: 0:00:26. Batch 80 of 241. Elapsed: 0:00:52. Batch 120 of 241. Elapsed: 0:01:18. Batch 160 of 241. Elapsed: 0:01:44. Batch 200 of 241. Elapsed: 0:02:10. Batch 240 of 241. Elapsed: 0:02:36. Average training loss: 0.53 Training epcoh took: 0:02:37 Running Validation... Accuracy: 0.84 Validation Loss: 0.36 Validation took: 0:00:02 ======== Epoch 4 / 6 ======== Training... Batch 40 of 241. Elapsed: 0:00:26. Batch 80 of 241. Elapsed: 0:00:52. Batch 120 of 241. Elapsed: 0:01:18. Batch 160 of 241. Elapsed: 0:01:44. Batch 200 of 241. Elapsed: 0:02:10. Batch 240 of 241. Elapsed: 0:02:36. Average training loss: 0.51 Training epcoh took: 0:02:36 Running Validation... Accuracy: 0.84 Validation Loss: 0.37 Validation took: 0:00:02 ======== Epoch 5 / 6 ======== Training... Batch 40 of 241. Elapsed: 0:00:26. Batch 80 of 241. Elapsed: 0:00:52. Batch 120 of 241. Elapsed: 0:01:18. Batch 160 of 241. Elapsed: 0:01:44. Batch 200 of 241. Elapsed: 0:02:10. Batch 240 of 241. Elapsed: 0:02:36. Average training loss: 0.49 Training epcoh took: 0:02:36 Running Validation... Accuracy: 0.85 Validation Loss: 0.36 Validation took: 0:00:02 ======== Epoch 6 / 6 ======== Training... Batch 40 of 241. Elapsed: 0:00:26. Batch 80 of 241. Elapsed: 0:00:52. Batch 120 of 241. Elapsed: 0:01:18. Batch 160 of 241. Elapsed: 0:01:44. Batch 200 of 241. Elapsed: 0:02:10. Batch 240 of 241. Elapsed: 0:02:36. Average training loss: 0.47 Training epcoh took: 0:02:36 Running Validation... Accuracy: 0.86 Validation Loss: 0.35 Validation took: 0:00:02 Training complete! Total training took 0:15:51 (h:mm:ss) ###Markdown Let's view the summary of the training process. 
###Code import pandas as pd pd.set_option('precision', 2) df_stats = pd.DataFrame(data=training_stats) df_stats = df_stats.set_index('epoch') df_stats import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns # Use plot styling from seaborn. sns.set(style='darkgrid') # Increase the plot size and font size. sns.set(font_scale=1.5) plt.rcParams["figure.figsize"] = (12,6) # Plot the learning curve. plt.plot(df_stats['Training Loss'], 'b-o', label="Training") plt.plot(df_stats['Valid. Loss'], 'g-o', label="Validation") # Label the plot. plt.title("Training & Validation Loss") plt.xlabel("Epoch") plt.ylabel("Loss") plt.legend() plt.xticks([1, 2, 3, 4, 5, 6]) plt.show() ###Output _____no_output_____ ###Markdown Performance On Test Set Data Preparation We'll need to apply all of the same steps that we did for the training data to prepare our test data set. ###Code import pandas as pd # Load the dataset into a pandas dataframe. df = pd.read_csv("./cola_public/raw/out_of_domain_dev.tsv", delimiter='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence']) # Report the number of sentences. print('Number of test sentences: {:,}\n'.format(df.shape[0])) # Create sentence and label lists sentences = df.sentence.values labels = df.label.values # Tokenize all of the sentences and map the tokens to their word IDs.
input_ids = [] attention_masks = [] for sent in sentences: encoded_dict = tokenizer.encode_plus( sent, add_special_tokens = True, max_length = 64, pad_to_max_length = True, return_attention_mask = True, return_tensors = 'pt', ) input_ids.append(encoded_dict['input_ids']) attention_masks.append(encoded_dict['attention_mask']) input_ids = torch.cat(input_ids, dim=0) attention_masks = torch.cat(attention_masks, dim=0) labels = torch.tensor(labels) batch_size = 32 prediction_data = TensorDataset(input_ids, attention_masks, labels) prediction_sampler = SequentialSampler(prediction_data) prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=batch_size) ###Output Number of test sentences: 516 ###Markdown Evaluate on Test Set With the test set prepared, we can apply our fine-tuned model to generate predictions on the test set. ###Code # Prediction on test set print('Predicting labels for {:,} test sentences...'.format(len(input_ids))) model.eval() predictions , true_labels = [], [] for batch in prediction_dataloader: batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask, b_labels = batch with torch.no_grad(): result = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, return_dict=True) logits = result.logits logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() predictions.append(logits) true_labels.append(label_ids) print(' DONE.') ###Output Predicting labels for 516 test sentences... DONE. 
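The `encode_plus` call above performs truncation, padding and attention-mask creation in one step. As a plain-Python illustration of just that padding/mask logic (illustrative only -- `pad_with_mask` and the token IDs are our own, not part of the Hugging Face API):

```python
# Toy sketch (not the real tokenizer): what padding to max_length plus
# return_attention_mask produce. Token IDs below are made up.

def pad_with_mask(token_ids, max_length, pad_id=0):
    """Truncate/pad a list of token IDs and build the matching attention mask."""
    ids = token_ids[:max_length]
    mask = [1] * len(ids)          # 1 marks real tokens
    pad_len = max_length - len(ids)
    return ids + [pad_id] * pad_len, mask + [0] * pad_len  # 0 marks padding

ids, mask = pad_with_mask([101, 2023, 2003, 102], max_length=8)
print(ids)   # [101, 2023, 2003, 102, 0, 0, 0, 0]
print(mask)  # [1, 1, 1, 1, 0, 0, 0, 0]
```

The mask is what lets the model ignore pad positions when attending over the batch.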
###Markdown Accuracy on the CoLA benchmark is measured using the "[Matthews correlation coefficient](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html)" (MCC).We use MCC here because the classes are imbalanced: ###Code print('Positive samples: %d of %d (%.2f%%)' % (df.label.sum(), len(df.label), (df.label.sum() / len(df.label) * 100.0))) from sklearn.metrics import matthews_corrcoef matthews_set = [] # Evaluate each test batch using Matthew's correlation coefficient print('Calculating Matthews Corr. Coef. for each batch...') # For each input batch... for i in range(len(true_labels)): # The predictions for this batch are a 2-column ndarray (one column for "0" # and one column for "1"). Pick the label with the highest value and turn this # in to a list of 0s and 1s. pred_labels_i = np.argmax(predictions[i], axis=1).flatten() # Calculate and store the coef for this batch. matthews = matthews_corrcoef(true_labels[i], pred_labels_i) matthews_set.append(matthews) # Create a barplot showing the MCC score for each batch of test samples. ax = sns.barplot(x=list(range(len(matthews_set))), y=matthews_set, ci=None) plt.title('MCC Score per Batch') plt.ylabel('MCC Score (-1 to +1)') plt.xlabel('Batch #') plt.show() ###Output _____no_output_____ ###Markdown Now we'll combine the results for all of the batches and calculate our final MCC score. ###Code # Combine the results across all batches. flat_predictions = np.concatenate(predictions, axis=0) # For each sample, pick the label (0 or 1) with the higher score. flat_predictions = np.argmax(flat_predictions, axis=1).flatten() # Combine the correct labels for each batch into a single list. flat_true_labels = np.concatenate(true_labels, axis=0) # Calculate the MCC mcc = matthews_corrcoef(flat_true_labels, flat_predictions) print('Total MCC: %.3f' % mcc) ###Output Total MCC: 0.576
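To see concretely why MCC is more informative than plain accuracy on imbalanced labels, here is a minimal pure-Python sketch (the notebook itself uses `sklearn.metrics.matthews_corrcoef`; the formula below is the standard definition, with MCC taken as 0 when the denominator vanishes):

```python
from math import sqrt

def mcc(y_true, y_pred):
    """Matthews correlation coefficient from confusion-matrix counts."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# 90% of labels are 1, and the "classifier" just predicts 1 everywhere:
y_true = [1] * 9 + [0]
y_pred = [1] * 10
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)             # 0.9 -- looks good
print(mcc(y_true, y_pred))  # 0.0 -- no better than chance
```

A degenerate all-ones predictor scores 90% accuracy here but an MCC of 0, which is exactly the failure mode MCC is designed to expose.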
Algorithms/Sorting with python.ipynb
###Markdown **Sorting with python.** **Bubble Sorting.** * simplest sorting algorithm that works by repeatedly swapping the adjacent elements if they are in wrong order.* ![Imgur](https://i.imgur.com/9Pxe3S3.png) **Complexities.*** **Worst and Average Case Time Complexity:** O(n*n). * Worst case occurs when array is reverse sorted.* **Best Case Time Complexity:** O(n). * Best case occurs when array is already sorted.* **Auxiliary Space:** O(1)* **Boundary Cases:** Bubble sort takes minimum time (Order of n) when elements are already sorted.* **Sorting In Place:** Yes* **Stable:** Yes* Due to its simplicity, bubble sort is often used to introduce the concept of a sorting algorithm. **Usage.*** In computer graphics it is popular for its capability to detect a very small error (like swap of just two elements) in almost-sorted arrays and fix it with just linear complexity (2n). ###Code #let's write a code for bubble sort. def Bubble_Sort(array): n=len(array) for i in range(n): for j in range(0,n-i-1): if array[j]>array[j+1]: array[j],array[j+1]=array[j+1],array[j] #driver program. if __name__ == "__main__": arr=[5,1,4,2,8,9] print("The array before sorting is: ",arr) #now let's call the bubble sort function. sort_array = Bubble_Sort(arr) print("The array after sorting is: ",arr) ###Output The array before sorting is: [5, 1, 4, 2, 8, 9] The array after sorting is: [1, 2, 4, 5, 8, 9] ###Markdown **Insertion Sorting.*** Insertion sort is a simple sorting algorithm that works the way we sort playing cards in our hands. **Illustration.*** array = 12, 11, 13, 5, 6* Let us loop for i = 1 (second element of the array) to 4 (last element of the array)* i = 1. Since 11 is smaller than 12, move 12 and insert 11 before 12 * 11, 12, 13, 5, 6* i = 2. 13 will remain at its position as all elements in A[0..I-1] are smaller than 13 * 11, 12, 13, 5, 6* i = 3. 5 will move to the beginning and all other elements from 11 to 13 will move one position ahead of their current position. 
* 5, 11, 12, 13, 6* i = 4. 6 will move to position after 5, and elements from 11 to 13 will move one position ahead of their current position. * 5, 6, 11, 12, 13 ###Code #function for insertion sort def Insertion_Sort(array): #first of all we'll store the key variable viz. a boundary between sorted and unsorted array #then we'll compare the sorted array with the key.. if key is smaller than sorted array then we swap all #elements in sorted array n=len(array) for i in range(1,n): print("i: ",i) key = array[i] j=i-1 print("j: ",j) print("array[j]: ",array[j]) while j>=0 and key<array[j]: array[j+1]=array[j] j-=1 array[j+1]=key #let's monitor array at every stage. print("array: ",arr) #driver program. if __name__ == "__main__": arr=[5,1,4,2,8,9] print("The array before sorting is: ",arr) #now let's call the bubble sort function. Insertion_Sort(arr) print("The array after sorting is: ",arr) ###Output The array before sorting is: [5, 1, 4, 2, 8, 9] i: 1 j: 0 array[j]: 5 array: [1, 5, 4, 2, 8, 9] i: 2 j: 1 array[j]: 5 array: [1, 4, 5, 2, 8, 9] i: 3 j: 2 array[j]: 5 array: [1, 2, 4, 5, 8, 9] i: 4 j: 3 array[j]: 5 array: [1, 2, 4, 5, 8, 9] i: 5 j: 4 array[j]: 8 array: [1, 2, 4, 5, 8, 9] The array after sorting is: [1, 2, 4, 5, 8, 9] ###Markdown **Selection Sort.*** The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering ascending order) from unsorted part and putting it at the beginning. The algorithm maintains two subarrays in a given array.* 1) The subarray which is already sorted.* 2) Remaining subarray which is unsorted.* In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray. ###Code #let's write a code for selection sort using python. def Selection_Sort(array): n=len(array) for i in range(n): min_id=i for j in range(i+1,n): if array[min_id]>array[j]: min_id=j array[i],array[min_id]=array[min_id],array[i] #driver program. 
if __name__ == "__main__": arr=[5,1,4,2,8,9] print("The array before sorting is: ",arr) #now let's call the selection sort function. Selection_Sort(arr) print("The array after sorting is: ",arr) ###Output The array before sorting is: [5, 1, 4, 2, 8, 9] The array after sorting is: [1, 2, 4, 5, 8, 9]
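All three sorts above should agree with Python's built-in `sorted` on any input. A quick randomized cross-check, restating the three algorithms compactly as pure functions (these helper names are ours; the logic matches the implementations above):

```python
import random

def bubble_sort(a):
    a = list(a)
    n = len(a)
    for i in range(n):
        for j in range(n - i - 1):
            if a[j] > a[j + 1]:           # swap adjacent out-of-order pairs
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and key < a[j]:      # shift the sorted prefix right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def selection_sort(a):
    a = list(a)
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)  # index of the minimum
        a[i], a[m] = a[m], a[i]
    return a

random.seed(0)
for _ in range(100):
    arr = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert bubble_sort(arr) == insertion_sort(arr) == selection_sort(arr) == sorted(arr)
print("all three sorts agree with sorted()")
```

Randomized agreement with a trusted oracle is a cheap way to catch off-by-one errors in the loop bounds of hand-written sorts.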
content/ch-gates/basic-circuit-identities.ipynb
###Markdown Basic Circuit Identities ###Code from qiskit import * from qiskit.circuit import Gate ###Output _____no_output_____ ###Markdown When we program quantum computers, our aim is always to build useful quantum circuits from the basic building blocks. But sometimes, we might not have all the basic building blocks we want. In this section, we'll look at how we can transform basic gates into each other, and how to use them to build some gates that are slightly more complex \(but still pretty basic\).Many of the techniques discussed in this chapter were first proposed in a paper by Barenco and coauthors in 1995 [1]. Making a controlled-$Z$ from a CNOT The controlled-Z or `cz` gate is another well-used two-qubit gate. Just as the CNOT applies an $X$ to its target qubit whenever its control is in state $|1\rangle$, the controlled-$Z$ applies a $Z$ in the same case. In Qiskit it can be invoked directly with```python a controlled-Zqc.cz(c,t)```where c and t are the control and target qubits. In IBM Q devices, however, the only kind of two-qubit gate that can be directly applied is the CNOT. We therefore need a way to transform one to the other.The process for this is quite simple. We know that the Hadamard transforms the states $|0\rangle$ and $|1\rangle$ to the states $|+\rangle$ and $|-\rangle$. We also know that the effect of the $Z$ gate on the states $|+\rangle$ and $|-\rangle$ is the same as that for $X$ on the state $|0\rangle$ and $|1\rangle$. From this reasoning, or from simply multiplying matrices, we find that$$H X H = Z,\\\\H Z H = X.$$The same trick can be used to transform a CNOT into a controlled-$Z$. All we need to do is precede and follow the CNOT with a Hadamard on the target qubit. 
This will transform any $X$ applied to that qubit into a $Z$.```python also a controlled-Zqc.h(t)qc.cx(c,t)qc.h(t)```More generally, we can transform a single CNOT into a controlled version of any rotation around the Bloch sphere by an angle $\pi$, by simply preceding and following it with the correct rotations. For example, a controlled-$Y$:```python a controlled-Yqc.sdg(t)qc.cx(c,t)qc.s(t)```and a controlled-$H$:```python a controlled-Hqc.ry(-pi/4,t)qc.cx(c,t)qc.ry(pi/4,t)``` Swapping qubits Sometimes we need to move information around in a quantum computer. For some qubit implementations, this could be done by physically moving them. Another option is simply to move the state between two qubits. This is done by the SWAP gate.```python swaps states of qubits a and bqc.swap(a,b)```The command above directly invokes this gate, but let's see how we might make it using our standard gate set. For this, we'll need to consider a few examples.First, we'll look at the case that qubit a is in state $|1\rangle$ and qubit b is in state $|0\rangle$. For this we'll apply the following gates:```python swap a 1 from a to bqc.cx(a,b) copies 1 from a to bqc.cx(b,a) uses the 1 on b to rotate the state of a to 0```This has the effect of putting qubit b in state $|1\rangle$ and qubit a in state $|0\rangle$. In this case at least, we have done a SWAP.Now let's take this state and SWAP back to the original one. As you may have guessed, we can do this with the reverse of the above process:```python swap a q from b to aqc.cx(b,a) copies 1 from b to aqc.cx(a,b) uses the 1 on a to rotate the state of b to 0```Note that in these two processes, the first gate of one would have no effect on the initial state of the other. For example, when we swap the $|1\rangle$ b to a, the first gate is `cx(b,a)`. 
If this were instead applied to a state where no $|1\rangle$ was initially on b, it would have no effect.Note also that for these two processes, the final gate of one would have no effect on the final state of the other. For example, the final `cx(b,a)` that is required when we swap the $|1\rangle$ from a to b has no effect on the state where the $|1\rangle$ is not on b.With these observations, we can combine the two processes by adding an ineffective gate from one onto the other. For example,```pythonqc.cx(b,a)qc.cx(a,b)qc.cx(b,a)```We can think of this as a process that swaps a $|1\rangle$ from a to b, but with a useless `qc.cx(b,a)` at the beginning. We can also think of it as a process that swaps a $|1\rangle$ from b to a, but with a useless `qc.cx(b,a)` at the end. Either way, the result is a process that can do the swap both ways around.It also has the correct effect on the $|00\rangle$ state. This is symmetric, and so swapping the states should have no effect. Since the CNOT gates have no effect when their control qubits are $|0\rangle$, the process correctly does nothing.The $|11\rangle$ state is also symmetric, and so needs a trivial effect from the swap. In this case, the first CNOT gate in the process above will cause the second to have no effect, and the third undoes the first. Therefore, the whole effect is indeed trivial.We have thus found a way to decompose SWAP gates into our standard gate set of single-qubit rotations and CNOT gates.```python swaps states of qubits a and bqc.cx(b,a)qc.cx(a,b)qc.cx(b,a)```It works for the states $|00\rangle$, $|01\rangle$, $|10\rangle$ and $|11\rangle$, as well as for all superpositions of them. 
It therefore swaps all possible two-qubit states.The same effect would also result if we changed the order of the CNOT gates:```python swaps states of qubits a and bqc.cx(a,b)qc.cx(b,a)qc.cx(a,b)```This is an equally valid way to get the SWAP gate.The derivation used here was very much based on the z basis states, but it could also be done by thinking about what is required to swap qubits in states $|+\rangle$ and $|-\rangle$. The resulting ways of implementing the SWAP gate will be completely equivalent to the ones here. Making the CNOTs we need from the CNOTs we have The gates in any quantum computer are driven by the physics of the underlying system. In IBM Q devices, the physics behind CNOTs means that they cannot be directly applied to all possible pairs of qubits. For those pairs for which a CNOT can be applied, it typically has a particular orientation. One specific qubit must act as control, and the other must act as the target, without allowing us to choose. Changing the direction of a CNOTLet's deal with the second problem described above: If we have a CNOT with control qubit $c$ and target qubit $t$, how can we make one for which qubit $t$ acts as the control and qubit $c$ is the target?This question would be very simple to answer for the controlled-$Z$. For this gate, it doesn't matter which way around the control and target qubits are.```pythonqc.cz(c,t)```has exactly the same effect as ```pythonqc.cz(t,c)```This means that we can think of either one as the control, and the other as the target.To see why this is true, let's remind ourselves of what the Z gate is:$$Z= \begin{pmatrix} 1&0 \\\\ 0&-1 \end{pmatrix}.$$We can think of this as multiplying the state by $-1$, but only when it is $|1\rangle$.For a controlled-$Z$ gate, the control qubit must be in state $|1\rangle$ for a $Z$ to be applied to the target qubit. Given the above property of $Z$, this only has an effect when the target is in state $|1\rangle$. 
We can therefore think of the controlled-$Z$ gate as one that multiplies the state of two qubits by $-1$, but only when the state is $|11\rangle$.This new interpretation is phrased in a perfectly symmetric way, and demonstrates that the labels of 'control' and 'target' are not necessary for this gate.This property gives us a way to reverse the orientation of a CNOT. We can first turn the CNOT into a controlled-$Z$ by using the method described earlier: placing a Hadamard both before and after on the target qubit.```python a czqc.h(t)qc.cx(c,t)qc.h(t)```Then, since we are free to choose which way around to think about a controlled-$Z$'s action, we can choose to think of $t$ as the control and $c$ as the target. We can then transform this controlled-$Z$ into a corresponding CNOT. We just need to place a Hadamard both before and after on the target qubit \(which is now qubit $c$\).```python a cx with control qubit t and target qubit cqc.h(c)qc.h(t)qc.cx(c,t)qc.h(t)qc.h(c)```And there we have it: we've turned around the CNOT. All that is needed is a Hadamard on both qubits before and after.The rest of this subsection is dedicated to another explanation of how to turn around a CNOT, with a bit of math (introduced in the 'States for Many Qubits' article of the previous chapter, and the 'Fun with Matrices' article of this chapter), and some different insight. Feel free to skip over it.Here is another way to write the CNOT gate:$${\rm CX}_{c,t} = |0\rangle \langle0| \otimes I + |1\rangle \langle1| \otimes X.$$Here the $|1\rangle \langle1|$ ensures that the second term only affects those parts of a superposition for which the control qubit $c$ is in state $|1\rangle$. For those, the effect on the target qubit t is $X$. The first terms similarly address those parts of the superposition for which the control qubit is in state $|0\rangle$, in which case it leaves the target qubit unaffected.Now let's do a little math. 
The $X$ gate has eigenvalues $\pm 1$ for the states $|+\rangle$ and $|-\rangle$. The $I$ gate has an eigenvalue of $1$ for all states including $|+\rangle$ and $|-\rangle$. We can thus write them in spectral form as$$X = |+\rangle \langle+| \, \, - \, \, |-\rangle \langle-|, \, \, \, \, I = |+\rangle \langle+| \, \, + \, \, |-\rangle \langle-|$$Substituting these into the expression above gives us$${\rm CX}_{c,t} = |0\rangle \langle0| \otimes |+\rangle \langle+| \, \, + \, \, |0\rangle \langle0| \otimes |-\rangle \langle-| \, \, + \, \, |1\rangle \langle1| \otimes |+\rangle \langle+| \, \, - \, \, |1\rangle \langle1| \otimes |-\rangle \langle-|$$Using the states $|0\rangle$ and $|1\rangle$, we can write the $Z$ gate in spectral form, and also use an alternative \(but completely equivalent\) spectral form for $I$:$$Z = |0\rangle \langle0| ~-~ |1\rangle \langle1|, ~~~ I = |0\rangle \langle0| ~+~ |1\rangle \langle1|.$$With these, we can factorize the parts of the CNOT expressed with the $|0\rangle$ and $|1\rangle$ state:$${\rm CX}_{c,t} = I \otimes |+\rangle \langle+| \, \, + \, \, Z \otimes |-\rangle \langle-|$$This gives us a whole new way to interpret the effect of the CNOT. The $Z \otimes |-\rangle \langle-| $ term addresses the parts of a superposition for which qubit $t$ is in state $|-\rangle$ and then applies a $Z$ gate to qubit $c$. The other term similarly does nothing to qubit $c$ when qubit $t$ is in state $|+\rangle.$ In this new interpretation, it is qubit $t$ that acts as the control. It is the $|+\rangle$ and $|-\rangle$ states that decide whether an action is performed, and that action is the gate $Z$. This sounds like a very different gate to our familiar CNOT, and yet it is the CNOT. These are two equally true descriptions of its effects.Among the many uses of this property is the method to turn around a CNOT. 
For example, consider applying a Hadamard to qubit $c$ both before and after this CNOT:```pythonh(c)cx(c,t)h(c)```This transforms the $Z$ in the $Z \otimes |-\rangle \langle-| $ term into an $X$, and leaves the other term unchanged. The combined effect is then a gate that applies an $X$ to qubit $c$ when qubit $t$ is in state $|-\rangle$. This is halfway to what we are wanting to build.To complete the process, we can apply a Hadamard both before and after on qubit $t$. This transforms the $|+\rangle$ and $|-\rangle$ states in each term into $|0\rangle$ and $|1\rangle$. Now we have something that applies an $X$ to qubit $c$ when qubit $t$ is in state $|1\rangle$. This is exactly what we want: a CNOT in reverse, with qubit $t$ as the control and $c$ as the target. CNOT between distant qubitsSuppose we have a control qubit $c$ and a target qubit $t$, and we want to do a CNOT gate between them. If this gate is directly possible on a device, we can just do it. If it's only possible to do the CNOT in the wrong direction, we can use the method explained above. But what if qubits $c$ and $t$ are not connected at all?If qubits $c$ and $t$ are on completely different devices in completely different labs in completely different countries, you may be out of luck. But consider the case where it is possible to do a CNOT between qubit $c$ and an additional qubit $a$, and it is also possible to do one between qubits $a$ and $t$. The new qubit can then be used to mediate the interaction between $c$ and $t$.One way to do this is with the SWAP gate. We can simply SWAP $a$ and t, do the CNOT between $c$ and $a$, and then swap $a$ and $t$ back again. The end result is that we have effectively done a CNOT between $c$ and $t$. 
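The SWAP-mediated construction can be checked without any quantum simulator: CNOT and SWAP only permute the computational basis states (and act linearly on superpositions), so it is enough to track the bits of all eight basis states of $c$, $a$ and $t$. A minimal sketch in plain Python (the qubit indices and helper names are ours, not Qiskit's):

```python
from itertools import product

def cx(bits, c, t):
    """Classical action of a CNOT on a tuple of bits."""
    bits = list(bits)
    if bits[c]:
        bits[t] ^= 1
    return tuple(bits)

def swap(bits, a, b):
    bits = list(bits)
    bits[a], bits[b] = bits[b], bits[a]
    return tuple(bits)

c, a, t = 0, 1, 2  # control, mediator, target
for bits in product([0, 1], repeat=3):
    # SWAP a,t ; CNOT c,a ; SWAP a,t ...
    mediated = swap(cx(swap(bits, a, t), c, a), a, t)
    # ... acts exactly like a direct CNOT c,t on every basis state
    assert mediated == cx(bits, c, t)
print("SWAP-mediated CNOT verified on all 8 basis states")
```

Basis-state checking suffices here precisely because every gate involved is a permutation of the computational basis; for circuits containing Hadamards one would need to multiply the full unitary matrices instead.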
The drawback of this method is that it costs a lot of CNOT gates, with six needed to implement the two SWAPs.Another method is to use the following sequence of gates.```python a CNOT between qubits c and t, with no end effect on qubit aqc.cx(a,t)qc.cx(c,a)qc.cx(a,t)qc.cx(c,a)```To see how this works, first consider the case where qubit $c$ is in state $|0\rangle$. The effect of the `cx(c,a)` gates in this case are trivial. This leaves only the two `cx(a,t)` gates, which cancel each other out. The net effect is therefore that nothing happens.If qubit $c$ is in state $|1\rangle$, things are not quite so simple. The effect of the `cx(c,a)` gates is to toggle the value of qubit $a$; it turns any $|0\rangle$ in the state of qubit $a$ into $|1\rangle$ and back again, and vice versa.This toggle effect affects the action of the two `cx(a,t)` gates. It ensures that whenever one is controlled on a $|0\rangle$ and has trivial effect, the other is controlled on a $|1\rangle$ and applies an $X$ to qubit $t$. The end effect is that qubit $a$ is left unchanged, but qubit $t$ will always have had an $X$ applied to it.Putting everything together, this means that an $X$ is applied to qubit $t$ only when qubit $c$ is in state $|1\rangle$. Qubit $a$ is left unaffected. We have therefore engineered a CNOT between qubits $c$ and $t$. Unlike when using SWAP gates, this required only four CNOT gates to implement.It is similarly possible to engineer CNOT gates when there is a longer chain of qubits required to connect our desired control and target. The methods described above simply need to be scaled up. Controlled rotations We have already seen how to build controlled $\pi$ rotations from a single CNOT gate. Now we'll look at how to build any controlled rotation.First, let's consider arbitrary rotations around the y axis. 
Specifically, consider the following sequence of gates.```pythonqc.ry(theta/2,t)qc.cx(c,t)qc.ry(-theta/2,t)qc.cx(c,t)```If the control qubit is in state $|0\rangle$, all we have here is a $R_y(\theta/2)$ immediately followed by its inverse, $R_y(-\theta/2)$. The end effect is trivial. If the control qubit is in state $|1\rangle$, however, the `ry(-theta/2)` is effectively preceded and followed by an X gate. This has the effect of flipping the direction of the y rotation and making a second $R_y(\theta/2)$. The net effect in this case is therefore to make a controlled version of the rotation $R_y(\theta)$. This method works because the x and y axes are orthogonal, which causes the x gates to flip the direction of the rotation. It therefore similarly works to make a controlled $R_z(\theta)$. A controlled $R_x(\theta)$ could similarly be made using controlled-Z gates. We can also make a controlled version of any single-qubit rotation, $U$. For this we simply need to find three rotations A, B and C, and a phase $\alpha$ such that$$ABC = I, ~~~e^{i\alpha}AZBZC = U$$We then use controlled-Z gates to cause the first of these relations to happen whenever the control is in state $|0\rangle$, and the second to happen when the control is state $|1\rangle$. An $R_z(2\alpha)$ rotation is also used on the control to get the right phase, which will be important whenever there are superposition states.```pythonqc.append(A, [t])qc.cz(c,t)qc.append(B, [t])qc.cz(c,t)qc.append(C, [t])qc.u1(alpha,c)```![A controlled version of a gate V](https://s3.us-south.cloud-object-storage.appdomain.cloud/strapi/4efe86a907a64a59a720b4dc54a98a88iden1.png)Here `A`, `B` and `C` are gates that implement $A$, $B$ and $C$, respectively, and must be defined as custom gates.
For example, if we wanted $A$ to be $R_x(\pi/4)$, the custom would be defined as```pythonqc_a = QuantumCircuit(1, name='A')qc_a.rx(np.pi/4,0)A = qc_a.to_instruction()``` The Toffoli The Toffoli gate is a three-qubit gate with two controls and one target. It performs an X on the target only if both controls are in the state $|1\rangle$. The final state of the target is then equal to either the AND or the NAND of the two controls, depending on whether the initial state of the target was $|0\rangle$ or $|1\rangle$. A Toffoli can also be thought of as a controlled-controlled-NOT, and is also called the CCX gate.```python Toffoli with control qubits a and b and target tqc.ccx(a,b,t)```To see how to build it from single- and two-qubit gates, it is helpful to first show how to build something even more general: an arbitrary controlled-controlled-U for any single-qubit rotation U. For this we need to define controlled versions of $V = \sqrt{U}$ and $V^\dagger$. In the code below, we assume that subroutines `cv` and `cvdg` have been defined for these, respectively. The controls are qubits $a$ and $b$, and the target is qubit $t$.```pythonqc.cv(b,t)qc.cx(a,b)qc.cvdg(b,t)qc.cx(a,b)qc.cv(a,t)```![A doubly controlled version of a gate V](https://s3.us-south.cloud-object-storage.appdomain.cloud/strapi/693974b222d24dba9111e02ae25e9151iden2.png)By tracing through each value of the two control qubits, you can convince yourself that a U gate is applied to the target qubit if and only if both controls are 1. Using ideas we have already described, you could now implement each controlled-V gate to arrive at some circuit for the doubly-controlled-U gate. It turns out that the minimum number of CNOT gates required to implement the Toffoli gate is six [2].![A Toffoli](https://s3.us-south.cloud-object-storage.appdomain.cloud/strapi/b3cbeb9b7d674d60a75bed351e4f2bcbiden3.png)The Toffoli is not the unique way to implement an AND gate in quantum computing. 
We could also define other gates that have the same effect, but which also introduce relative phases. In these cases, we can implement the gate with fewer CNOTs. For example, suppose we use both the controlled-Hadamard and controlled-$Z$ gates, which can both be implemented with a single CNOT. With these we can make the following circuit:```pythonqc.ch(a,t)qc.cz(b,t)qc.ch(a,t)```For the state $|00\rangle$ on the two controls, this does nothing to the target. For $|11\rangle$, the target experiences a $Z$ gate that is both preceded and followed by an H. The net effect is an $X$ on the target. For the states $|01\rangle$ and $|10\rangle$, the target experiences either just the two Hadamards \(which cancel each other out\) or just the $Z$ \(which only induces a relative phase\). This therefore also reproduces the effect of an AND, because the value of the target is only changed for the $|11\rangle$ state on the controls -- but it does it with the equivalent of just three CNOT gates. Arbitrary rotations from H and T The qubits in current devices are subject to noise, which basically consists of gates that are done by mistake. Simple things like temperature, stray magnetic fields or activity on neighboring qubits can make things happen that we didn't intend. For large applications of quantum computers, it will be necessary to encode our qubits in a way that protects them from this noise. This is done by making gates much harder to do by mistake, or to implement in a manner that is slightly wrong. This is unfortunate for the single-qubit rotations $R_x(\theta)$, $R_y(\theta)$ and $R_z(\theta)$. It is impossible to implement an angle $\theta$ with perfect accuracy, such that you are sure that you are not accidentally implementing something like $\theta + 0.0000001$. There will always be a limit to the accuracy we can achieve, and it will always be larger than is tolerable when we account for the build-up of imperfections over large circuits.
We will therefore not be able to implement these rotations directly in fault-tolerant quantum computers, but will instead need to build them in a much more deliberate manner.

Fault-tolerant schemes typically perform these rotations using multiple applications of just two gates: $H$ and $T$.

The T gate is expressed in Qiskit as

```python
qc.t(0)  # T gate on qubit 0
```

It is a rotation around the z axis by $\theta = \pi/4$, and so is expressed mathematically as $R_z(\pi/4) = e^{i\pi/8~Z}$.

In the following we assume that the $H$ and $T$ gates are effectively perfect. This can be engineered by suitable methods for error correction and fault-tolerance.

Using the Hadamard and the methods discussed in the last chapter, we can use the T gate to create a similar rotation around the x axis.

```python
qc.h(0)
qc.t(0)
qc.h(0)
```

Now let's put the two together. Let's make the gate $R_z(\pi/4)~R_x(\pi/4)$.

```python
qc.h(0)
qc.t(0)
qc.h(0)
qc.t(0)
```

Since this is a single-qubit gate, we can think of it as a rotation around the Bloch sphere. That means that it is a rotation around some axis by some angle. We don't need to think about the axis too much here, but it clearly won't be simply x, y or z. More important is the angle.

The crucial property of the angle for this rotation is that it is an irrational multiple of $2\pi$. You can prove this yourself with a bunch of math, but you can also see the irrationality in action by applying the gate. Repeating it $n$ times results in a rotation around the same axis by a different angle. Due to the irrationality, the angles that result from different repetitions will never be the same.

We can use this to our advantage. Each angle will be somewhere between $0$ and $2\pi$. Let's split this interval up into $n$ slices of width $2\pi/n$. For each repetition, the resulting angle will fall in one of these slices. If we look at the angles for the first $n+1$ repetitions, it must be true that at least one slice contains two of these angles.
Let's use $n_1$ to denote the number of repetitions required for the first, and $n_2$ for the second.

With this, we can prove something about the angle for $n_2-n_1$ repetitions. This is effectively the same as doing $n_2$ repetitions, followed by the inverse of $n_1$ repetitions. Since the angles for these are not equal (because of the irrationality) but also differ by no greater than $2\pi/n$ (because they correspond to the same slice), the angle for $n_2-n_1$ repetitions satisfies

$$\theta_{n_2-n_1} \neq 0, ~~~~-\frac{2\pi}{n} \leq \theta_{n_2-n_1} \leq \frac{2\pi}{n} .$$

We therefore have the ability to do rotations around small angles. We can use this to rotate around angles that are as small as we like, just by increasing the number of times we repeat this gate.

By using many small-angle rotations, we can also rotate by any angle we like. This won't always be exact, but it is guaranteed to be accurate up to $2\pi/n$, which can be made as small as we like. We now have power over the inaccuracies in our rotations.

So far, we only have the power to do these arbitrary rotations around one axis. For a second axis, we simply do the $R_z(\pi/4)$ and $R_x(\pi/4)$ rotations in the opposite order.

```python
qc.t(0)
qc.h(0)
qc.t(0)
qc.h(0)
```

The axis that corresponds to this rotation is not the same as that for the gate considered previously. We therefore now have arbitrary rotation around two axes, which can be used to generate any arbitrary rotation around the Bloch sphere. We are back to being able to do everything, though it costs quite a lot of $T$ gates.

It is because of this kind of application that $T$ gates are so prominent in quantum computation. In fact, the complexity of algorithms for fault-tolerant quantum computers is often quoted in terms of how many $T$ gates they'll need. This motivates the quest to achieve things with as few $T$ gates as possible.
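The argument above can be illustrated numerically. This sketch assumes the angle formula $\cos(\theta/2) = \cos^2(\pi/8)$ for the combined $R_z(\pi/4)\,R_x(\pi/4)$ rotation (which follows from $\mathrm{Tr}\,[R_z(a)R_x(b)] = 2\cos(a/2)\cos(b/2)$); the slice count $n = 100$ is an arbitrary choice:

```python
import math

# Rotation angle of the combined Rz(pi/4).Rx(pi/4) gate: for an SU(2)
# matrix, Tr(U) = 2*cos(theta/2), and here Tr = 2*cos(pi/8)**2.
theta = 2 * math.acos(math.cos(math.pi / 8) ** 2)
two_pi = 2 * math.pi
print(theta / math.pi)          # ~0.3489: no small-denominator fraction in sight

n = 100                         # split the circle into n slices of width 2*pi/n
seen = {}
n1, n2 = 0, 0
for k in range(1, n + 2):       # n+1 repetitions: pigeonhole forces a shared slice
    s = int(((k * theta) % two_pi) / (two_pi / n))
    if s in seen:
        n1, n2 = seen[s], k
        break
    seen[s] = k

# n2 - n1 repetitions therefore rotate by a small but nonzero angle.
small = ((n2 - n1) * theta) % two_pi
if small > math.pi:             # fold into (-pi, pi]
    small -= two_pi
assert small != 0 and abs(small) <= two_pi / n
print(n1, n2, small)
```

Increasing `n` shrinks the guaranteed bound $2\pi/n$ on the small angle, at the cost of more repetitions, which is exactly the trade-off the text describes.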
Note that the discussion above was simply intended to prove that $T$ gates can be used in this way, and does not represent the most efficient method we know.

References

[1] [Barenco, *et al.* 1995](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.3457?cm_mc_uid=43781767191014577577895&cm_mc_sid_50200000=1460741020)

[2] [Shende and Markov, 2009](http://dl.acm.org/citation.cfm?id=2011799)

###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Basic Circuit Identities
###Code
from qiskit import *
from qiskit.circuit import Gate
###Output
_____no_output_____
10-Objects and Data Structures Assessment Test.ipynb
###Markdown
Objects and Data Structures Assessment Test

Test your knowledge. **Answer the following questions**

Write a brief description of all the following Object Types and Data Structures we've learned about:

Numbers: two types, integers (2, 5, ...) and floats (2.3, 1.9, -8.1, ...)
Strings: treated as a sequence of characters, even when the text contains digits
Lists: an ordered, mutable sequence; elements can be of any type and may repeat; defined with []
Tuples: an immutable list, meaning its elements cannot be changed; defined with ()
Dictionaries: a mapping of key:value pairs; values can be of any type; defined with {}

Numbers
Write an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.

Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
(2*3)**0+(6/2)-1+97.25
###Output
_____no_output_____
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer.

What is the value of the expression 4 * (6 + 5)

What is the value of the expression 4 * 6 + 5

What is the value of the expression 4 + 6 * 5
###Code
4 * (6 + 5)
4 * 6 + 5
4 + 6 * 5
###Output
_____no_output_____
###Markdown
What is the *type* of the result of the expression 3 + 1.5 + 4? float

What would you use to find a number’s square root, as well as its square?
###Code
# Square root: number**0.5
100**0.5
# Square: number**2
5**2
###Output
_____no_output_____
###Markdown
Strings

Given the string 'hello' give an index command that returns 'e'. Enter your code in the cell below:
###Code
s = 'hello'
# Print out 'e' using indexing
s[1]
###Output
_____no_output_____
###Markdown
Reverse the string 'hello' using slicing:
###Code
s = 'hello'
# Reverse the string using slicing
s[::-1]
###Output
_____no_output_____
###Markdown
Given the string hello, give two methods of producing the letter 'o' using indexing.
###Code
s = 'hello'
# Print out the 'o'
# Method 1:
s[4]
# Method 2:
s[-1]
###Output
_____no_output_____
###Markdown
Lists

Build this list [0,0,0] two separate ways.
###Code
# Method 1:
x = [0, 0, 0]
x
# Method 2:
y = [0] * 3
y
###Output
_____no_output_____
###Markdown
Reassign 'hello' in this nested list to say 'goodbye' instead:
###Code
list3 = [1,2,[3,4,'hello']]
list3[2][2] = 'goodbye'
list3
###Output
_____no_output_____
###Markdown
Sort the list below:
###Code
list4 = [5,3,4,6,1]
sorted(list4)
###Output
_____no_output_____
###Markdown
Dictionaries

Using keys and indexing, grab the 'hello' from the following dictionaries:
###Code
d = {'simple_key':'hello'}
# Grab 'hello'
d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
d['k1']['k2']
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
# Grab hello
d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
d['k1'][2]['k2'][1]['tough'][2][0]
###Output
_____no_output_____
###Markdown
Can you sort a dictionary? Why or why not?

Tuples

What is the major difference between tuples and lists? Tuples are immutable, meaning their elements cannot be changed.

How do you create a tuple?
###Code
# tuples are characterized by ()
T = ('find', 4, 'true')
###Output
_____no_output_____
###Markdown
Sets

What is unique about a set? It is an unordered collection of unique elements.

Use a set to find the unique values of the list below:
###Code
list5 = [1,2,2,33,4,4,11,22,3,3,2]
x = set(list5)
x
###Output
_____no_output_____
###Markdown
Booleans

For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true.
(a > b) is not true. < If the value of left operand is less than the value of right operand, then condition becomes true. (a < b) is true. >= If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a >= b) is not true. <= If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a <= b) is true.

What will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!)
###Code
# Answer before running cell
2 > 3
# false
# Answer before running cell
3 <= 2
# false
# Answer before running cell
3 == 2.0
# false
# Answer before running cell
3.0 == 3
# true
# Answer before running cell
4**0.5 != 2
# false
###Output
_____no_output_____
###Markdown
Final Question: What is the boolean output of the cell block below?
###Code
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
# True or False?
l_one[2][0] >= l_two[2]['k1']
# false
###Output
_____no_output_____
###Markdown
Objects and Data Structures Assessment Test

Test your knowledge. **Answer the following questions**

Write a brief description of all the following Object Types and Data Structures we've learned about:

Numbers: int(5)
Strings: str('hello')
Lists: list([1,2,3,4])
Tuples: x = (1,2,3,4)
Dictionaries: x = {'cat':23, 'dog':26}

Numbers
Write an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.

Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
x = 20*10/100+(2**7)+.25-30
print(x)
###Output
100.25
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5) What is the value of the expression 4 * 6 + 5 What is the value of the expression 4 + 6 * 5 ###Code 44 29 34 x = 4*(6+5) y = 4 * 6 + 5 z = 4 + 6 * 5 print(x,y,z) ###Output 44 29 34 ###Markdown What is the *type* of the result of the expression 3 + 1.5 + 4? ###Code x= 3 + 1.5 + 4 type(x) ###Output _____no_output_____ ###Markdown What would you use to find a number’s square root, as well as its square? Square root:given x as a number the the square root as follows = x**0.5 Square:given x as a number the the square as follows = x**2 Strings Given the string 'hello' give an index command that returns 'e'. Enter your code in the cell below: ###Code s = 'hello' # Print out 'e' using indexing print(s[1]) ###Output e ###Markdown Reverse the string 'hello' using slicing: ###Code s ='hello' # Reverse the string using slicing print(s[::-1]) ###Output olleh ###Markdown Given the string hello, give two methods of producing the letter 'o' using indexing. ###Code s ='hello' # Print out the 'o' # Method 1: print(s[4]) # Method 2: print(s[-1]) ###Output o ###Markdown Lists Build this list [0,0,0] two separate ways. ###Code # Method 1: x= [0,0,0] print(x) # Method 2: x=("0"*3) print(list(x)) ###Output ['0', '0', '0'] ###Markdown Reassign 'hello' in this nested list to say 'goodbye' instead: ###Code list3 = [1,2,[3,4,'hello']] list3[2][2]='goodbye' print(list3[2][2]) ###Output goodbye ###Markdown Sort the list below: ###Code list4 = [5,3,4,6,1] print(sorted(list4)) ###Output [1, 3, 4, 5, 6] ###Markdown Dictionaries Using keys and indexing, grab the 'hello' from the following dictionaries: ###Code d = {'simple_key':'hello'} # Grab 'hello' print(d['simple_key']) d = {'k1':{'k2':'hello'}} # Grab 'hello' print(d['k1']['k2']) # Getting a little tricker d = {'k1':[{'nest_key':['this is deep',['hello']]}]} #Grab hello print(d['k1'][0]['nest_key'][1]) # This will be hard and annoying! 
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]} print(d['k1'][-1]['k2'][-1]['tough'][-1]) ###Output ['hello'] ###Markdown Can you sort a dictionary? Why or why not? yes .. for example if we have dictionary x by using the code print(sorted(x)) Tuples What is the major difference between tuples and lists? tuples are immutable How do you create a tuple? ###Code x=1,2,3 x=(1,2,3) print(x) type(x) ###Output _____no_output_____ ###Markdown Sets What is unique about a set? every set element is not duplicated Use a set to find the unique values of the list below: ###Code list5 = [1,2,2,33,4,4,11,22,3,3,2] print(set(list5)) ###Output {1, 2, 33, 4, 3, 11, 22} ###Markdown Booleans For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. (a == b) is not true.!=If values of two operands are not equal, then condition becomes true. (a != b) is true.&gt;If the value of left operand is greater than the value of right operand, then condition becomes true. (a &gt; b) is not true.&lt;If the value of left operand is less than the value of right operand, then condition becomes true. (a &lt; b) is true.&gt;=If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a &gt;= b) is not true. &lt;=If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a &lt;= b) is true. What will be the resulting Boolean of the following pieces of code (answer fist then check by typing it in!) 
###Code
# Answer before running cell
2 > 3
# false
# Answer before running cell
3 <= 2
# false
# Answer before running cell
3 == 2.0
# false
# Answer before running cell
3.0 == 3
# true
# Answer before running cell
4**0.5 != 2
# false
###Output
_____no_output_____
###Markdown
Final Question: What is the boolean output of the cell block below?
###Code
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
# True or False?
l_one[2][0] >= l_two[2]['k1']
# 3 >= 4, so false
###Output
_____no_output_____
###Markdown
Objects and Data Structures Assessment Test

Test your knowledge. **Answer the following questions**

Write a brief description of all the following Object Types and Data Structures we've learned about:

Numbers: in Python, numbers are defined by the types int and float
Strings: text and character data, defined by the type str
Lists: a structured representation of a series of objects, defined by [] square brackets; can store any types (int, float, list, str, tuple, etc.)
Tuples: can store multiple items, are defined by (), and are hashable objects (like int, bool, str)
Dictionaries: can carry different data types, defined by {key1: value1, ..., key_n: value_n}

Numbers
Write an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.

Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
print((4*10**2+3-2)/4)
###Output
100.25
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer.

What is the value of the expression 4 * (6 + 5)

What is the value of the expression 4 * 6 + 5

What is the value of the expression 4 + 6 * 5

> What is the value of the expression 4 * (6 + 5) = 44
> What is the value of the expression 4 * 6 + 5 = 29
> What is the value of the expression 4 + 6 * 5 = 34

What is the *type* of the result of the expression 3 + 1.5 + 4?
> float

What would you use to find a number’s square root, as well as its square?
###Code # Square root: print(100**0.5) # Square: print(10**2) ###Output 100 ###Markdown Strings Given the string 'hello' give an index command that returns 'e'. Enter your code in the cell below: ###Code s = 'hello' # Print out 'e' using indexing print(s[1]) ###Output e ###Markdown Reverse the string 'hello' using slicing: ###Code s ='hello' # Reverse the string using slicing s = 'hello'[::-1] ##i have googled this solution print(s) ###Output olleh ###Markdown Given the string hello, give two methods of producing the letter 'o' using indexing. ###Code s ='hello' # Print out the 'o' # Method 1: print(s[-1]) # Method 2: print(s[len(s)-1]) ###Output o ###Markdown Lists Build this list [0,0,0] two separate ways. ###Code # Method 1: l = [0, 0 , 0] print(l) # Method 2: l = list((0, 0, 0)) print(l) ###Output [0, 0, 0] ###Markdown Reassign 'hello' in this nested list to say 'goodbye' instead: ###Code list3 = [1,2,[3,4,'hello']] list3[2][2] = 'goodbye' print(list3[2][2]) ###Output goodbye ###Markdown Sort the list below: ###Code list4 = [5,3,4,6,1] list4.sort() print(list4) ###Output [1, 3, 4, 5, 6] ###Markdown Dictionaries Using keys and indexing, grab the 'hello' from the following dictionaries: ###Code d = {'simple_key':'hello'} # Grab 'hello' d['simple_key'] d = {'k1':{'k2':'hello'}} # Grab 'hello' d['k1']['k2'] # Getting a little tricker d = {'k1':[{'nest_key':['this is deep',['hello']]}]} #Grab hello d['k1'][0]['nest_key'][1][0] # This will be hard and annoying! d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]} d['k1'][2]['k2'][1]['tough'][2][0] ###Output _____no_output_____ ###Markdown Can you sort a dictionary? 
Why or why not?
> Yes, you can sort a dict using sorted(dict.items(), key = lambda x: x[1]); here the sorted function orders the items with respect to the value.

Tuples

What is the major difference between tuples and lists?
> Tuples are immutable: "you can't change the values in the tuple"

How do you create a tuple?
> tuple_1 = (1, 2, 3), using ()

Sets

What is unique about a set?
> No values can be repeated.

Use a set to find the unique values of the list below:
###Code
list5 = [1,2,2,33,4,4,11,22,3,3,2]
print(set(list5))
###Output
{1, 2, 33, 4, 3, 11, 22}
###Markdown
Booleans

For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true.
l_one[2][0] >= l_two[2]['k1'] # False ###Output _____no_output_____ ###Markdown Objects and Data Structures Assessment Test Test your knowledge. **Answer the following questions** Write a brief description of all the following Object Types and Data Structures we've learned about: Numbers: two basic types learned: 1. int is integer number which represent a whole number. 2. float is floating point number represent numbers with decimal pointStrings: Is a sequesnce of characters Lists: Ordered sequence of objectsTuples: ordered immutable sequence of objectsDictionaries: unordered key,value pairs : {key1:value1, key2:value2, ..} NumbersWrite an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25 ###Code 10**2 + 10.25 - 10 * 4 / 4 ###Output _____no_output_____ ###Markdown Answer these 3 questions without typing code. Then type code to check your answer. What is the value of the expression 4 * (6 + 5) What is the value of the expression 4 * 6 + 5 What is the value of the expression 4 + 6 * 5 ###Code #1. 44 #2. 29 #3. 34 ###Output _____no_output_____ ###Markdown What is the *type* of the result of the expression 3 + 1.5 + 4?float What would you use to find a number’s square root, as well as its square? ###Code # Square root: # import math # math.sqrt(number) # Square: # number ** 2 ###Output _____no_output_____ ###Markdown Strings Given the string 'hello' give an index command that returns 'e'. Enter your code in the cell below: ###Code s = 'hello' # Print out 'e' using indexing print(s[1]) ###Output e ###Markdown Reverse the string 'hello' using slicing: ###Code s ='hello' # Reverse the string using slicing s[::-1] ###Output _____no_output_____ ###Markdown Given the string hello, give two methods of producing the letter 'o' using indexing. 
###Code s ='hello' # Print out the 'o' # Method 1: s[4] # Method 2: s[-1] ###Output _____no_output_____ ###Markdown Lists Build this list [0,0,0] two separate ways. ###Code # Method 1: my_list_1 = [0,0,0] # Method 2: my_list_2 = [] my_list_2.append(0) my_list_2.append(0) my_list_2.append(0) ###Output _____no_output_____ ###Markdown Reassign 'hello' in this nested list to say 'goodbye' instead: ###Code list3 = [1,2,[3,4,'hello']] list3[2][2] = 'goodbye' ###Output _____no_output_____ ###Markdown Sort the list below: ###Code list4 = [5,3,4,6,1] list4.sort() ###Output _____no_output_____ ###Markdown Dictionaries Using keys and indexing, grab the 'hello' from the following dictionaries: ###Code d = {'simple_key':'hello'} # Grab 'hello' d['simple_key'] d = {'k1':{'k2':'hello'}} # Grab 'hello' d['k1']['k2'] # Getting a little tricker d = {'k1':[{'nest_key':['this is deep',['hello']]}]} #Grab hello d['k1'][0]['nest_key'][1][0] # This will be hard and annoying! d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]} d['k1'][2]['k2'][1]['tough'][2][0] ###Output _____no_output_____ ###Markdown Can you sort a dictionary? Why or why not?dictionary is a hash map, so the main answer is not.but if want to sort by key or by value, and key or value respectively is sortable type, can sort it Tuples What is the major difference between tuples and lists?list is mutable and tuple is immutable How do you create a tuple?one way is using parentheses () and element separated by commasmy_tuple = (1, 2, 3) Sets What is unique about a set?every set element is unique Use a set to find the unique values of the list below: ###Code list5 = [1,2,2,33,4,4,11,22,3,3,2] set5 = set(list5) print(set5) ###Output {1, 2, 33, 4, 3, 11, 22} ###Markdown Booleans For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. 
(a == b) is not true. != If values of two operands are not equal, then condition becomes true. (a != b) is true. > If the value of left operand is greater than the value of right operand, then condition becomes true. (a > b) is not true. < If the value of left operand is less than the value of right operand, then condition becomes true. (a < b) is true. >= If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a >= b) is not true. <= If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a <= b) is true.

What will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!)
###Code
# Answer before running cell
2 > 3
# False
# Answer before running cell
3 <= 2
# False
# Answer before running cell
3 == 2.0
# False
# Answer before running cell
3.0 == 3
# True
# Answer before running cell
4**0.5 != 2
# False
###Output
_____no_output_____
###Markdown
Final Question: What is the boolean output of the cell block below?
###Code
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
# True or False?
l_one[2][0] >= l_two[2]['k1']
# False
###Output
_____no_output_____
###Markdown
Objects and Data Structures Assessment Test

Test your knowledge. **Answer the following questions**

Write a brief description of all the following Object Types and Data Structures we've learned about:

Numbers: In Python, numeric data types represent data with a numeric value. A numeric value can be an Integer, a Floating-point number, or even a Complex number. These values are defined as the int, float and complex classes in Python. We can use the type() function to know which class a variable or a value belongs to, and the isinstance() function to check if an object belongs to a particular class.
You can convert from one type to another with the int(), float(), and complex() methods.

Strings: The string type in Python is represented by the str class. Strings are immutable, which means that their content cannot be altered after their creation. Like many other popular programming languages, strings in Python are arrays of bytes representing unicode characters. However, Python does not have a character data type; a single character is simply a string with a length of 1!

Lists: A list is one of the most used datatypes in Python, and is very flexible. It is a collection of items which is ordered and mutable, which means that the values of a list can be altered even after its creation. All the elements in a list are indexed according to a definite sequence, with 0 being the first index. Each element in the list has its own place, which allows duplication of elements. Lists in Python are just like the arrays declared in other languages. All the items in a list do not need to be of the same type, which makes lists one of the most powerful tools in Python.

Tuples: A tuple is an ordered collection of Python objects, much like a list. The sequence of values stored in a tuple can be of any type, and they are indexed by integers, which allows duplicate members. The important difference between a list and a tuple is that tuples are immutable. Tuples, once created, cannot be modified. Tuples are used to write-protect data, and are usually faster than lists, as they cannot change dynamically.

Dictionaries: A dictionary in Python is an unordered and indexed collection of values, used to store data. Unlike other data types that hold only a single value as an element, a dictionary holds key:value pairs. The key:value structure makes the dictionary more optimized for retrieving data. It is generally used when you have a huge amount of data, since you must know the key to retrieve the value, but not the other way around!
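The mutability contrast described in these answers can be demonstrated directly; a minimal sketch (variable names are illustrative):

```python
# Lists are mutable; tuples are immutable, as the descriptions above say.
nums_list = [1, 2, 3]
nums_tuple = (1, 2, 3)

nums_list[0] = 99              # allowed: lists support item assignment
try:
    nums_tuple[0] = 99         # a tuple rejects item assignment
except TypeError as err:
    message = str(err)

assert nums_list == [99, 2, 3]
assert nums_tuple == (1, 2, 3)
print(message)                 # 'tuple' object does not support item assignment
```

This immutability is also what makes a tuple hashable (and so usable as a dictionary key) when its elements are hashable.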
NumbersWrite an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25 ###Code x = 1.25 + (5**2*8/2) - 1 print(x) ###Output 100.25 ###Markdown Answer these 3 questions without typing code. Then type code to check your answer. What is the value of the expression 4 * (6 + 5) = 44 What is the value of the expression 4 * 6 + 5 = 29 What is the value of the expression 4 + 6 * 5 = 34 ###Code x = 4 * (6 + 5) y = 4 * 6 + 5 z = 4 + 6 * 5 print(x,y,z) ###Output 44 29 34 ###Markdown What is the *type* of the result of the expression 3 + 1.5 + 4?float What would you use to find a number’s square root, as well as its square? ###Code # Square root: 16**(0.5) # Square: 4**2 ###Output _____no_output_____ ###Markdown Strings Given the string 'hello' give an index command that returns 'e'. Enter your code in the cell below: ###Code s = 'hello' # Print out 'e' using indexing s[1] ###Output _____no_output_____ ###Markdown Reverse the string 'hello' using slicing: ###Code s ='hello' # Reverse the string using slicing s[::-1] ###Output _____no_output_____ ###Markdown Given the string hello, give two methods of producing the letter 'o' using indexing. ###Code s ='hello' # Print out the 'o' # Method 1: print(s[-1]) # Method 2: print(s[4]) ###Output o ###Markdown Lists Build this list [0,0,0] two separate ways. 
###Code
# Method 1:
mylist = [0,0,0]
print(mylist)
# Method 2:
[0]*3
###Output
_____no_output_____
###Markdown
Reassign 'hello' in this nested list to say 'goodbye' instead:
###Code
list3 = [1,2,[3,4,'hello']]
list3[2].pop(2)
list3[2].append('goodbye')
print(list3)
###Output
[1, 2, [3, 4, 'goodbye']]
###Markdown
Sort the list below:
###Code
list4 = [5,3,4,6,1]
list4.sort()
print(list4)
###Output
[1, 3, 4, 5, 6]
###Markdown
Dictionaries

Using keys and indexing, grab the 'hello' from the following dictionaries:
###Code
d = {'simple_key':'hello'}
# Grab 'hello'
d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
d['k1']['k2']
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
# Grab hello
d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
d['k1'][2]['k2'][1]['tough'][2][0]
###Output
_____no_output_____
###Markdown
Can you sort a dictionary? Why or why not? Dictionaries are mappings, not sequences, so they cannot be ordered.

Tuples

What is the major difference between tuples and lists? Tuples are immutable.

How do you create a tuple? With parentheses, i.e. curved brackets: ()

Sets

What is unique about a set? Only one of each item.

Use a set to find the unique values of the list below:
###Code
list5 = [1,2,2,33,4,4,11,22,3,3,2]
x = set(list5)
print(x)
###Output
{1, 2, 33, 4, 3, 11, 22}
###Markdown
Booleans

For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. (a == b) is not true. != If values of two operands are not equal, then condition becomes true. (a != b) is true. > If the value of left operand is greater than the value of right operand, then condition becomes true.
(a < b) is true. >= If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a >= b) is not true. <= If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a <= b) is true.

What will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!)
###Code
# Answer before running cell
2 > 3
# false
# Answer before running cell
3 <= 2
# false
# Answer before running cell
3 == 2.0
# false
# Answer before running cell
3.0 == 3
# true
# Answer before running cell
4**0.5 != 2
# false
###Output
_____no_output_____
###Markdown
Final Question: What is the boolean output of the cell block below?
###Code
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
# True or False?
l_one[2][0] >= l_two[2]['k1']
# false (3 >= 4 is False)
###Output
_____no_output_____
###Markdown
Objects and Data Structures Assessment Test

Test your knowledge. **Answer the following questions**

Write a brief description of all the following Object Types and Data Structures we've learned about:

Numbers: a numeric value, which can be an integer, a floating-point number or a complex number
Strings: a sequence of characters
Lists: used to store variables of different data types in a single variable, called a list
Tuples: a list-like data type, but immutable
Dictionaries: used to store multiple data types in pairs (key and value); keys do not allow duplicates

Numbers
Write an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.

Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
(75-5**2+75/600)*2
###Output
_____no_output_____
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5) = 44
What is the value of the expression 4 * 6 + 5 = 29
What is the value of the expression 4 + 6 * 5 = 34
###Code
###Output
_____no_output_____
###Markdown
What is the *type* of the result of the expression 3 + 1.5 + 4? the type is float
What would you use to find a number’s square root, as well as its square?
###Code
# Square root: number**0.5 (or math.sqrt(number))
# Square: number**2
###Output
_____no_output_____
###Markdown
Strings
Given the string 'hello' give an index command that returns 'e'.
Enter your code in the cell below:
###Code
s = 'hello'
# Print out 'e' using indexing
s[1]
###Output
_____no_output_____
###Markdown
Reverse the string 'hello' using slicing:
###Code
s ='hello'
# Reverse the string using slicing
s[::-1]
###Output
_____no_output_____
###Markdown
Given the string hello, give two methods of producing the letter 'o' using indexing.
###Code
s ='hello'
# Print out the 'o'
# Method 1:
s[4]
# Method 2:
x = s[::-1][0]
x[0]
###Output
_____no_output_____
###Markdown
Lists
Build this list [0,0,0] two separate ways.
###Code
# Method 1:
s = [0,0,0]
s
# Method 2:
s = [0 for _ in range(3)]
s
###Output
_____no_output_____
###Markdown
Reassign 'hello' in this nested list to say 'goodbye' instead:
###Code
list3 = [1,2,[3,4,'hello']]
list3[2][2]='goodbye'
list3
###Output
_____no_output_____
###Markdown
Sort the list below:
###Code
list4 = [5,3,4,6,1]
list4.sort()
list4
###Output
_____no_output_____
###Markdown
Dictionaries
Using keys and indexing, grab the 'hello' from the following dictionaries:
###Code
d = {'simple_key':'hello'}
# Grab 'hello'
d.get('simple_key')
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
d.get('k1', {}).get('k2')
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
#Grab hello
k1_list = d.get('k1', [])   # avoid shadowing the built-in name "list"
l = k1_list[0].get('nest_key')[1][0]
l
# This will be hard and annoying! 
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
d['k1'][2]['k2'][1]['tough'][2][0]
###Output
_____no_output_____
###Markdown
Can you sort a dictionary? Why or why not?A dictionary itself cannot be sorted, because it is a mapping rather than a sequence (though its keys or items can be sorted into a list).
Tuples
What is the major difference between tuples and lists?Lists can be changed, and tuples cannot be changed.
How do you create a tuple?t = ("a", "b", "c")
Sets
What is unique about a set?It does not allow duplicate values.
Use a set to find the unique values of the list below:
###Code
list5 = [1,2,2,33,4,4,11,22,3,3,2]
set(list5)
###Output
_____no_output_____
###Markdown
Booleans
For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. (a == b) is not true.!=If values of two operands are not equal, then condition becomes true. (a != b) is true.&gt;If the value of left operand is greater than the value of right operand, then condition becomes true. (a &gt; b) is not true.&lt;If the value of left operand is less than the value of right operand, then condition becomes true. (a &lt; b) is true.&gt;=If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a &gt;= b) is not true. &lt;=If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a &lt;= b) is true. What will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!)
###Code
# Answer before running cell
#Answer:False
2 > 3
# Answer before running cell
#Answer:False
3 <= 2
# Answer before running cell
#Answer:False
3 == 2.0
# Answer before running cell
#Answer:True
3.0 == 3
# Answer before running cell
#Answer:False
4**0.5 != 2
###Output
_____no_output_____
###Markdown
Final Question: What is the boolean output of the cell block below? 
###Code
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
# True or False?
l_one[2][0] >= l_two[2]['k1']
#Answer:False
###Output
_____no_output_____
###Markdown
Objects and Data Structures Assessment Test
Test your knowledge.
**Answer the following questions**
Write a brief description of all the following Object Types and Data Structures we've learned about:
Numbers: int, float
Strings: str; anything between "" or ''
Lists: []; holds objects, and changes can be applied to it
Tuples: (); fixed and low-space, changes cannot be applied to it
Dictionaries: 'key' : 'value'; stores all types of inputs (list, int, matrix, tuples, ...)
NumbersWrite an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
((8**2)+(200.50-(8*8)))/2
###Output
_____no_output_____
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5) 44
What is the value of the expression 4 * 6 + 5 29
What is the value of the expression 4 + 6 * 5 34
###Code
print(4 * (6+5))
print(4 * 6 + 5)
print(4 + 6 * 5)
###Output
44
29
34
###Markdown
What is the *type* of the result of the expression 3 + 1.5 + 4?float
What would you use to find a number’s square root, as well as its square?
###Code
# Square root: ()**0.5
4**0.5
# Square: ()**2
4**2
###Output
_____no_output_____
###Markdown
Strings
Given the string 'hello' give an index command that returns 'e'.
Enter your code in the cell below:
###Code
s = 'hello'
# Print out 'e' using indexing
s[1]
###Output
_____no_output_____
###Markdown
Reverse the string 'hello' using slicing:
###Code
s ='hello'
# Reverse the string using slicing
s[::-1]
###Output
_____no_output_____
###Markdown
Given the string hello, give two methods of producing the letter 'o' using indexing. 
###Code
s ='hello'
# Print out the 'o'
# Method 1:
s[-1]
# Method 2:
s[4]
###Output
_____no_output_____
###Markdown
Lists
Build this list [0,0,0] two separate ways.
###Code
# Method 1:
x = [0, 0, 0]
x
# Method 2:
x = list((0, 0, 0))
x
###Output
_____no_output_____
###Markdown
Reassign 'hello' in this nested list to say 'goodbye' instead:
###Code
list3 = [1,2,[3,4,'hello']]
list3[2][2] = 'goodbye'
list3
###Output
_____no_output_____
###Markdown
Sort the list below:
###Code
list4 = [5,3,4,6,1]
sorted(list4)
###Output
_____no_output_____
###Markdown
Dictionaries
Using keys and indexing, grab the 'hello' from the following dictionaries:
###Code
d = {'simple_key':'hello'}
# Grab 'hello'
d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
d['k1']['k2']
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
#Grab hello
d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
d['k1'][2]['k2'][1]['tough'][2][0]
###Output
_____no_output_____
###Markdown
Can you sort a dictionary? Why or why not?No; a dictionary itself cannot be sorted, only its keys or values can be sorted into a list.
Tuples
What is the major difference between tuples and lists?Tuples hold fixed values and cannot be changed; lists can.
###Code
## How do you create a tuple?
x = (1, 2, 3)
x
###Output
_____no_output_____
###Markdown
Sets
What is unique about a set?A set holds only unique values (no duplicates), which also makes mathematical set operations easy.
Use a set to find the unique values of the list below:
###Code
list5 = [1,2,2,33,4,4,11,22,3,3,2]
x = set(list5)
x
###Output
_____no_output_____
###Markdown
Booleans
For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. (a == b) is not true.!=If values of two operands are not equal, then condition becomes true. (a != b) is true.&gt;If the value of left operand is greater than the value of right operand, then condition becomes true. 
(a &gt; b) is not true.&lt;If the value of left operand is less than the value of right operand, then condition becomes true. (a &lt; b) is true.&gt;=If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a &gt;= b) is not true. &lt;=If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a &lt;= b) is true. What will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!)
###Code
# Answer before running cell: false
2 > 3
# Answer before running cell: false
3 <= 2
# Answer before running cell: false
3 == 2.0
# Answer before running cell: true
3.0 == 3
# Answer before running cell: false
4**0.5 != 2
###Output
_____no_output_____
###Markdown
Final Question: What is the boolean output of the cell block below?
###Code
# two nested lists
# 3 >= 4, so False
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
# True or False?
l_one[2][0] >= l_two[2]['k1']
###Output
_____no_output_____
###Markdown
Objects and Data Structures Assessment Test
Test your knowledge.
**Answer the following questions**
Write a brief description of all the following Object Types and Data Structures we've learned about:
Numbers: There are three numeric types in Python (int, float, complex).
Strings: Strings in Python are surrounded by either single quotation marks ('') or double quotation marks (""), and their type is str.
Lists: Lists are used to store multiple items in a single variable. Lists are created using square brackets [].
Tuples: Tuples are used to store multiple items in a single variable. A tuple is a collection which is ordered and unchangeable. Tuples are written with round brackets ().
Dictionaries: Dictionaries are used to store data values in key:value pairs. Dictionaries are written with curly brackets {}, and have keys and values. 
NumbersWrite an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
x = ((5**(2*2))/(2+3)-24.75)
x
###Output
_____no_output_____
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5)
What is the value of the expression 4 * 6 + 5
What is the value of the expression 4 + 6 * 5
> 4 * (6 + 5) = 44
> 4 * 6 + 5 = 29
> 4 + 6 * 5 = 34
###Code
print(4 * (6 + 5))
print(4 * 6 + 5)
print(4 + 6 * 5)
###Output
44
29
34
###Markdown
What is the *type* of the result of the expression 3 + 1.5 + 4? >> float
What would you use to find a number’s square root, as well as its square?
###Code
# Square root:
n = float(input("Enter Number: "))
r = n**.5
print(f"Square root = {r}")
# Square:
n = float(input("Enter Number: "))
s = n**2
print(f"Square = {s}")
###Output
Enter Number: 7
Square = 49.0
###Markdown
Strings
Given the string 'hello' give an index command that returns 'e'.
Enter your code in the cell below:
###Code
s = 'hello'
# Print out 'e' using indexing
print(s[1])
###Output
e
###Markdown
Reverse the string 'hello' using slicing:
###Code
s ='hello'
# Reverse the string using slicing
print(s[::-1])
###Output
olleh
###Markdown
Given the string hello, give two methods of producing the letter 'o' using indexing.
###Code
s ='hello'
# Print out the 'o'
# Method 1:
print(s[-1])
# Method 2:
print(s[4])
###Output
o
###Markdown
Lists
Build this list [0,0,0] two separate ways. 
###Code # Method 1: my_list = [0,0,0] print(my_list) # Method 2: my_list = list((0,0,0)) print(my_list) ###Output [0, 0, 0] ###Markdown Reassign 'hello' in this nested list to say 'goodbye' instead: ###Code list3 = [1,2,[3,4,'hello']] list3[2][2]='goodbye' print(list3) ###Output [1, 2, [3, 4, 'goodbye']] ###Markdown Sort the list below: ###Code list4 = [5,3,4,6,1] list4.sort() print(list4) ###Output [1, 3, 4, 5, 6] ###Markdown Dictionaries Using keys and indexing, grab the 'hello' from the following dictionaries: ###Code d = {'simple_key':'hello'} # Grab 'hello' print(d['simple_key']) d = {'k1':{'k2':'hello'}} # Grab 'hello' print(d['k1']['k2']) # Getting a little tricker d = {'k1':[{'nest_key':['this is deep',['hello']]}]} #Grab hello print(d['k1'][0]['nest_key'][1][0]) # This will be hard and annoying! d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]} print(d['k1'][2]['k2'][1]['tough'][2][0]) ###Output hello ###Markdown Can you sort a dictionary? Why or why not? YESDictionaries are unordered data structures. They use a mapping structure to store data. Dictionaries map keys to values, creating pairs that hold related data.Using the Python sorted() method, you can sort the contents of a dictionary by value. Tuples What is the major difference between tuples and lists?> Tuple items are ordered, unchangeable, and allow duplicate values. How do you create a tuple?> my_tuple = ("apple", "banana", "cherry") Sets What is unique about a set?> Sets cannot have two items with the same value. Use a set to find the unique values of the list below: ###Code list5 = [1,2,2,33,4,4,11,22,3,3,2] my_set = set(list5) print(my_set) ###Output {1, 2, 33, 4, 3, 11, 22} ###Markdown Booleans For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. 
(a == b) is not true.!=If values of two operands are not equal, then condition becomes true. (a != b) is true.&gt;If the value of left operand is greater than the value of right operand, then condition becomes true. (a &gt; b) is not true.&lt;If the value of left operand is less than the value of right operand, then condition becomes true. (a &lt; b) is true.&gt;=If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a &gt;= b) is not true. &lt;=If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a &lt;= b) is true. What will be the resulting Boolean of the following pieces of code (answer fist then check by typing it in!) ###Code # Answer before running cell 2 > 3 #false # Answer before running cell 3 <= 2 #false # Answer before running cell 3 == 2.0 #false # Answer before running cell 3.0 == 3 #true # Answer before running cell 4**0.5 != 2 #false ###Output _____no_output_____ ###Markdown Final Question: What is the boolean output of the cell block below? ###Code # two nested lists l_one = [1,2,[3,4]] l_two = [1,2,{'k1':4}] # True or False? l_one[2][0] >= l_two[2]['k1'] #false ###Output _____no_output_____ ###Markdown Objects and Data Structures Assessment Test Test your knowledge. **Answer the following questions** Write a brief description of all the following Object Types and Data Structures we've learned about: Numbers:Strings:Lists:Tuples:Dictionaries: NumbersWrite an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25 Answer these 3 questions without typing code. Then type code to check your answer. 
What is the value of the expression 4 * (6 + 5) What is the value of the expression 4 * 6 + 5 What is the value of the expression 4 + 6 * 5 What is the *type* of the result of the expression 3 + 1.5 + 4? What would you use to find a number’s square root, as well as its square? ###Code # Square root: # Square: ###Output _____no_output_____ ###Markdown Strings Given the string 'hello' give an index command that returns 'e'. Enter your code in the cell below: ###Code s = 'hello' # Print out 'e' using indexing ###Output _____no_output_____ ###Markdown Reverse the string 'hello' using slicing: ###Code s ='hello' # Reverse the string using slicing ###Output _____no_output_____ ###Markdown Given the string hello, give two methods of producing the letter 'o' using indexing. ###Code s ='hello' # Print out the 'o' # Method 1: # Method 2: ###Output _____no_output_____ ###Markdown Lists Build this list [0,0,0] two separate ways. ###Code # Method 1: # Method 2: ###Output _____no_output_____ ###Markdown Reassign 'hello' in this nested list to say 'goodbye' instead: ###Code list3 = [1,2,[3,4,'hello']] ###Output _____no_output_____ ###Markdown Sort the list below: ###Code list4 = [5,3,4,6,1] ###Output _____no_output_____ ###Markdown Dictionaries Using keys and indexing, grab the 'hello' from the following dictionaries: ###Code d = {'simple_key':'hello'} # Grab 'hello' d = {'k1':{'k2':'hello'}} # Grab 'hello' # Getting a little tricker d = {'k1':[{'nest_key':['this is deep',['hello']]}]} #Grab hello # This will be hard and annoying! d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]} ###Output _____no_output_____ ###Markdown Can you sort a dictionary? Why or why not? Tuples What is the major difference between tuples and lists? How do you create a tuple? Sets What is unique about a set? 
Use a set to find the unique values of the list below: ###Code list5 = [1,2,2,33,4,4,11,22,3,3,2] ###Output _____no_output_____ ###Markdown Booleans For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. (a == b) is not true.!=If values of two operands are not equal, then condition becomes true. (a != b) is true.&gt;If the value of left operand is greater than the value of right operand, then condition becomes true. (a &gt; b) is not true.&lt;If the value of left operand is less than the value of right operand, then condition becomes true. (a &lt; b) is true.&gt;=If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a &gt;= b) is not true. &lt;=If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a &lt;= b) is true. What will be the resulting Boolean of the following pieces of code (answer fist then check by typing it in!) ###Code # Answer before running cell 2 > 3 # Answer before running cell 3 <= 2 # Answer before running cell 3 == 2.0 # Answer before running cell 3.0 == 3 # Answer before running cell 4**0.5 != 2 ###Output _____no_output_____ ###Markdown Final Question: What is the boolean output of the cell block below? ###Code # two nested lists l_one = [1,2,[3,4]] l_two = [1,2,{'k1':4}] # True or False? l_one[2][0] >= l_two[2]['k1'] ###Output _____no_output_____ ###Markdown Objects and Data Structures Assessment Test Test your knowledge. 
**Answer the following questions**
Write a brief description of all the following Object Types and Data Structures we've learned about:
Numbers: can be integer or float
Strings: any value between single or double quotes
Lists: a sequence of values of any type: numbers, strings, a mix, or even lists (in that case it can be called a matrix), created using square brackets []. A list is editable (append and remove can be applied).
Tuples: a sequence of values of any type: numbers, strings, or a mix, created using round brackets (). Tuples are immutable, which means they cannot be edited (values cannot be changed).
Dictionaries: a sequence of elements, each consisting of a key and a value. The value can be a number, string, list, or even a dictionary. They are created using curly brackets {}. Dictionaries are editable: objects can be updated, deleted, and inserted.
NumbersWrite an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
5**4*2/5-150+0.25
###Output
_____no_output_____
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5) = 44
What is the value of the expression 4 * 6 + 5 = 29
What is the value of the expression 4 + 6 * 5 = 34
###Code
4 * (6 + 5)
4 * 6 + 5
4 + 6 * 5
###Output
_____no_output_____
###Markdown
What is the *type* of the result of the expression 3 + 1.5 + 4? float (the value is 8.5)
###Code
3 + 1.5 + 4
###Output
_____no_output_____
###Markdown
What would you use to find a number’s square root, as well as its square?
###Code
# Square root: use **0.5
#Example:
4**0.5
# Square: use **2
#Example:
2**2
###Output
_____no_output_____
###Markdown
Strings
Given the string 'hello' give an index command that returns 'e'. 
Enter your code in the cell below: ###Code s = 'hello' # Print out 'e' using indexing s[1] ###Output _____no_output_____ ###Markdown Reverse the string 'hello' using slicing: ###Code s ='hello' # Reverse the string using slicing s[::-1] ###Output _____no_output_____ ###Markdown Given the string hello, give two methods of producing the letter 'o' using indexing. ###Code s ='hello' # Print out the 'o' # Method 1: s[4] # Method 2: s[-1] ###Output _____no_output_____ ###Markdown Lists Build this list [0,0,0] two separate ways. ###Code # Method 1: s1=[0,0,0] s1 # Method 2: s2=[] s2.append(0) s2.append(0) s2.append(0) s2 ###Output _____no_output_____ ###Markdown Reassign 'hello' in this nested list to say 'goodbye' instead: ###Code list3 = [1,2,[3,4,'hello']] list3[2][2]='goodbye' list3 ###Output _____no_output_____ ###Markdown Sort the list below: ###Code list4 = [5,3,4,6,1] list4.sort() list4 ###Output _____no_output_____ ###Markdown Dictionaries Using keys and indexing, grab the 'hello' from the following dictionaries: ###Code d = {'simple_key':'hello'} # Grab 'hello' d['simple_key'] d = {'k1':{'k2':'hello'}} # Grab 'hello' d['k1']['k2'] # Getting a little tricker d = {'k1':[{'nest_key':['this is deep',['hello']]}]} #Grab hello d['k1'][0]['nest_key'][1][0] # This will be hard and annoying! d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]} d['k1'][2]['k2'][1]['tough'][2][0] ###Output _____no_output_____ ###Markdown Can you sort a dictionary? Why or why not?Keys order, can be sorted and values as well but not the whole dictionary. I believe because each element of the dictionary is indexed through key, and value is flexible in type, meaning it can be number, string , list , or even a dictionary. So, there will be no base for sorting different types of values. 
Tuples What is the major difference between tuples and lists?lists are mutable while tuples are not How do you create a tuple?using round brackets and assign valueexample: t=(2,4,6,8) Sets What is unique about a set? set is created with unique, non-repeated values Use a set to find the unique values of the list below: ###Code list5 = [1,2,2,33,4,4,11,22,3,3,2] s = set(list5) s ###Output _____no_output_____ ###Markdown Booleans For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. (a == b) is not true.!=If values of two operands are not equal, then condition becomes true. (a != b) is true.&gt;If the value of left operand is greater than the value of right operand, then condition becomes true. (a &gt; b) is not true.&lt;If the value of left operand is less than the value of right operand, then condition becomes true. (a &lt; b) is true.&gt;=If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a &gt;= b) is not true. &lt;=If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a &lt;= b) is true. What will be the resulting Boolean of the following pieces of code (answer fist then check by typing it in!) ###Code # Answer before running cell = false 2 > 3 # Answer before running cell = false 3 <= 2 # Answer before running cell = false 3 == 2.0 # Answer before running cell = true 3.0 == 3 # Answer before running cell = false 4**0.5 != 2 ###Output _____no_output_____ ###Markdown Final Question: What is the boolean output of the cell block below? ###Code # two nested lists l_one = [1,2,[3,4]] l_two = [1,2,{'k1':4}] # True or False? = false l_one[2][0] >= l_two[2]['k1'] ###Output _____no_output_____ ###Markdown Objects and Data Structures Assessment Test Test your knowledge. 
**Answer the following questions**
Write a brief description of all the following Object Types and Data Structures we've learned about:
Numbers: int, float
Strings: str
Lists: list
Tuples: tuple
Dictionaries: dict
NumbersWrite an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
2**2*25 + 1/2 - 0.25
###Output
_____no_output_____
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5)
What is the value of the expression 4 * 6 + 5
What is the value of the expression 4 + 6 * 5
###Code
{'first expression': 44 , 'second expression': 29 , 'third expression': 34}
print(4*(6+5))
print(4*6+5)
print(4+6*5)
###Output
44
29
34
###Markdown
What is the *type* of the result of the expression 3 + 1.5 + 4?float
What would you use to find a number’s square root, as well as its square?
###Code
# Square root: **0.5
# Square: **2
###Output
_____no_output_____
###Markdown
Strings
Given the string 'hello' give an index command that returns 'e'.
Enter your code in the cell below:
###Code
s = 'hello'
# Print out 'e' using indexing
s[1]
###Output
_____no_output_____
###Markdown
Reverse the string 'hello' using slicing:
###Code
s ='hello'
# Reverse the string using slicing
s[::-1]
###Output
_____no_output_____
###Markdown
Given the string hello, give two methods of producing the letter 'o' using indexing.
###Code
s ='hello'
# Print out the 'o'
# Method 1:
s[4]
# Method 2:
s[-1]
###Output
_____no_output_____
###Markdown
Lists
Build this list [0,0,0] two separate ways. 
###Code
# Method 1:
list1 = [0,0,0]
list1
# Method 2:
list2 = [0]*3
list2
###Output
_____no_output_____
###Markdown
Reassign 'hello' in this nested list to say 'goodbye' instead:
###Code
list3 = [1,2,[3,4,'hello']]
list3[2][2] = 'goodbye'
list3
###Output
_____no_output_____
###Markdown
Sort the list below:
###Code
list4 = [5,3,4,6,1]
sorted(list4)
###Output
_____no_output_____
###Markdown
Dictionaries
Using keys and indexing, grab the 'hello' from the following dictionaries:
###Code
d = {'simple_key':'hello'}
# Grab 'hello'
d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
d['k1']['k2']
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
#Grab hello
d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
d['k1'][2]['k2'][1]['tough'][2][0]
###Output
_____no_output_____
###Markdown
Can you sort a dictionary? Why or why not?Only its keys or values (when they share the same data type) can be sorted into a list, not the dictionary itself.
Tuples
What is the major difference between tuples and lists?Tuples use () and are immutable; lists use [] and are mutable.
What is unique about a set?It has unique values (no repeated values), plus methods to add and delete values.
Use a set to find the unique values of the list below:
###Code
list5 = [1,2,2,33,4,4,11,22,3,3,2]
set(list5)
###Output
_____no_output_____
###Markdown
Booleans
For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. (a == b) is not true.!=If values of two operands are not equal, then condition becomes true. (a != b) is true.&gt;If the value of left operand is greater than the value of right operand, then condition becomes true. (a &gt; b) is not true.&lt;If the value of left operand is less than the value of right operand, then condition becomes true. (a &lt; b) is true.&gt;=If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a &gt;= b) is not true. 
&lt;=If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a &lt;= b) is true. What will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!)
###Code
# Answer before running cell
2 > 3
# False
# Answer before running cell
3 <= 2
# False
# Answer before running cell
3 == 2.0
# False
# Answer before running cell
3.0 == 3
# True
# Answer before running cell
4**0.5 != 2
# False
###Output
_____no_output_____
###Markdown
Final Question: What is the boolean output of the cell block below?
###Code
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
# True or False?
l_one[2][0] >= l_two[2]['k1']
# False
###Output
_____no_output_____
###Markdown
Objects and Data Structures Assessment Test
Test your knowledge.
**Answer the following questions**
Write a brief description of all the following Object Types and Data Structures we've learned about:
Numbers: a numeric data type used to represent any type of number (int, float, or complex), and bool is a subtype of int. An int can be any length, while a float has only about 15 decimal places of precision.
Strings: used to store text information; furthermore, the string data type is immutable.
Lists: used to store collections of items regardless of their data type, and they are mutable.
Tuples: like a list, but immutable; used to keep data that shouldn't be changed.
Dictionaries: a mapping object which links a hashable key to an object; it is mutable and contains key-value pairs.
NumbersWrite an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
print(100 + 0.25)
print(100.5 - 0.25)
print((20*5)+0.25)
print(2**2*50/2 + 0.5 - 0.25)
###Output
_____no_output_____
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer. 
What is the value of the expression 4 * (6 + 5) = 44 What is the value of the expression 4 * 6 + 5 = 29 What is the value of the expression 4 + 6 * 5 = 34 ###Code print(4 * (6 + 5)) print(4 * 6 + 5 ) print(4 + 6 * 5) ###Output _____no_output_____ ###Markdown What is the *type* of the result of the expression 3 + 1.5 + 4?Float What would you use to find a number’s square root, as well as its square? ###Code # Square root: import math x = 25**2 print('the Square root of {} is {}'.format(x,math.sqrt(x))) # Square: x = 25**2 print(x) x = pow(x,2) print(x) ###Output _____no_output_____ ###Markdown Strings Given the string 'hello' give an index command that returns 'e'. Enter your code in the cell below: ###Code s = 'hello' # Print out 'e' using indexing l = input("Please enter the letter you search for: ") if l in s: print(s[s.find(l)]) else: print(f'Letter {l} not existed at String {s}') ###Output Please enter the letter you search for: KK Letter KK not existed at String hello ###Markdown Reverse the string 'hello' using slicing: ###Code s ='hello' # Reverse the string using slicing s[::-1] ###Output _____no_output_____ ###Markdown Given the string hello, give two methods of producing the letter 'o' using indexing. ###Code s ='hello' # Print out the 'o' s[-1:] # Method 1: # Method 2: s[len(s)-1:] ###Output _____no_output_____ ###Markdown Lists Build this list [0,0,0] two separate ways. 
###Code
# Method 1:
li = [0,0,0]
print(li)
# Method 2:
li = [0 for x in range(3)]
print(li)
###Output
[0, 0, 0]
###Markdown
Reassign 'hello' in this nested list to say 'goodbye' instead:
###Code
list3 = [1,2,[3,4,'hello']]
list3[2][2] = 'goodbye'
list3
###Output
_____no_output_____
###Markdown
Sort the list below:
###Code
list4 = [5,3,4,6,1]
print(sorted(list4))
print(list4)
print(sorted(list4,reverse=True))
###Output
[1, 3, 4, 5, 6]
[5, 3, 4, 6, 1]
[6, 5, 4, 3, 1]
###Markdown
Dictionaries
Using keys and indexing, grab the 'hello' from the following dictionaries:
###Code
d = {'simple_key':'hello'}
# Grab 'hello'
d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
d['k1']['k2']
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
#Grab hello
d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
d['k1'][2]['k2'][1]['tough'][2][0]
di = {'mohamed':'10','Ali':'9','Ahmed':'15','alaa':'2','Fathi':'32'}
di
print(sorted(di.items()))
dil = list(di)
print(dil)
print(type(dil))
###Output
['mohamed', 'Ali', 'Ahmed', 'alaa', 'Fathi']
<class 'list'>
###Markdown
Can you sort a dictionary? Why or why not?Yes, its items can be sorted with sorted(), which produces a list of key-value pairs.
Tuples
What is the major difference between tuples and lists?Lists are mutable; tuples are not.
How do you create a tuple?Add data between ()
Sets
What is unique about a set?It's unordered and can't be accessed by index.
Use a set to find the unique values of the list below:
###Code
list5 = [1,2,2,33,4,4,11,22,3,3,2]
us = set(list5)
print(us)
{i:list5.count(i) for i in us}
###Output
_____no_output_____
###Markdown
Booleans
For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.OperatorDescriptionExample==If the values of two operands are equal, then the condition becomes true. (a == b) is not true.!=If values of two operands are not equal, then condition becomes true. 
(a != b) is true. > If the value of left operand is greater than the value of right operand, then condition becomes true. (a > b) is not true. < If the value of left operand is less than the value of right operand, then condition becomes true. (a < b) is true. >= If the value of left operand is greater than or equal to the value of right operand, then condition becomes true. (a >= b) is not true. <= If the value of left operand is less than or equal to the value of right operand, then condition becomes true. (a <= b) is true. What will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!) ###Code # Answer before running cell 2 > 3 # False # Answer before running cell 3 <= 2 # False # Answer before running cell 3 == 2.0 # False # Answer before running cell 3.0 == 3 # True # Answer before running cell 4**0.5 != 2 # False ###Output _____no_output_____ ###Markdown Final Question: What is the Boolean output of the cell block below? ###Code # two nested lists l_one = [1,2,[3,4]] l_two = [1,2,{'k1':4}] # True or False? l_one[2][0] >= l_two[2]['k1'] # False ###Output _____no_output_____
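To see why the final question comes out the way it does, the two nested lookups can be unpacked step by step (a short sketch repeating the data from the question):

```python
# Data from the final question above
l_one = [1, 2, [3, 4]]
l_two = [1, 2, {'k1': 4}]

# l_one[2] is the inner list [3, 4]; its element 0 is 3
left = l_one[2][0]
# l_two[2] is the inner dict {'k1': 4}; key 'k1' gives 4
right = l_two[2]['k1']

print(left, right, left >= right)  # 3 4 False
```

Each bracket peels off one level of nesting, so the comparison is simply 3 >= 4.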
Seg and Clust Toronto part three.ipynb
###Markdown Peer-graded Assignment: Segmenting and Clustering Neighborhoods in Toronto Import Dependencies ###Code import pandas as pd %pip install lxml # Used at the creation of the notebook to support pandas read_html ###Output Collecting lxml Downloading https://files.pythonhosted.org/packages/ec/be/5ab8abdd8663c0386ec2dd595a5bc0e23330a0549b8a91e32f38c20845b6/lxml-4.4.1-cp36-cp36m-manylinux1_x86_64.whl (5.8MB) 5.8MB 15.7MB/s eta 0:00:01 Installing collected packages: lxml Successfully installed lxml-4.4.1 Note: you may need to restart the kernel to use updated packages. ###Markdown Read the HTML data and assign it to a dataframe. ###Code html_data = pd.read_html('http://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M') type(html_data) html_data ###Output _____no_output_____ ###Markdown Assign the table to a dataframe object. ###Code wiki_table = html_data[0] type(wiki_table) wiki_table.head() ###Output _____no_output_____ ###Markdown The dataframe will consist of three columns: PostalCode, Borough, and Neighborhood. ###Code wiki_table.rename(columns={'Postcode': 'PostalCode', 'Neighbourhood': 'Neighborhood'}, inplace=True) ###Output _____no_output_____ ###Markdown Only process the cells that have an assigned borough. Ignore cells with a borough that is Not assigned. ###Code wiki_table = wiki_table[~wiki_table.Borough.str.contains("Not assigned")] wiki_table = wiki_table.reset_index(drop=True) wiki_table.head() ###Output _____no_output_____ ###Markdown More than one neighborhood can exist in one postal code area. For example, in the table on the Wikipedia page, you will notice that M5A is listed twice and has two neighborhoods: Harbourfront and Regent Park. These two rows will be combined into one row with the neighborhoods separated with a comma as shown in row 11 in the above table.
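The combine step described above can be illustrated on a toy frame first (the postal codes here mirror the M5A example; this is just a sketch of the groupby-and-join pattern used in this notebook):

```python
import pandas as pd

# Toy frame with a duplicated postal code, mirroring the M5A example
toy = pd.DataFrame({
    'PostalCode': ['M5A', 'M5A', 'M4B'],
    'Borough': ['Downtown Toronto', 'Downtown Toronto', 'East York'],
    'Neighborhood': ['Harbourfront', 'Regent Park', 'Woodbine Gardens'],
})

# One row per (PostalCode, Borough); neighborhoods joined with a comma
combined = toy.groupby(['PostalCode', 'Borough'])['Neighborhood'].apply(','.join).reset_index()
print(combined.loc[combined.PostalCode == 'M5A', 'Neighborhood'].iloc[0])  # Harbourfront,Regent Park
```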
###Code wiki_table.columns wiki_table.head() wiki_unique_postal = wiki_table.groupby(['PostalCode', 'Borough'])['Neighborhood'].apply(','.join).reset_index() wiki_unique_postal.head(15) ###Output _____no_output_____ ###Markdown If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough. So for the 9th cell in the table on the Wikipedia page, the value of the Borough and the Neighborhood columns will be Queen's Park. ###Code wiki_unique_postal[wiki_unique_postal['Neighborhood'].str.contains('Not ass')] wiki_unique_postal.loc[wiki_unique_postal['Neighborhood']=='Not assigned', 'Neighborhood'] = wiki_unique_postal['Borough'] wiki_unique_postal[wiki_unique_postal['Neighborhood'].str.contains('Not ass')] wiki_unique_postal[wiki_unique_postal.PostalCode.str.contains('M7A')] wiki_unique_postal.shape %pip install geocoder import geocoder # import geocoder # Needs more testing to validate and build against return output. Potential need for input updates as well. # Turned the given while loop into a function. def get_coords(postal_code): # initialize your variable to None lat_lng_coords = None # loop until you get the coordinates while(lat_lng_coords is None): g = geocoder.google('{}, Toronto, Ontario'.format(postal_code)) lat_lng_coords = g.latlng return lat_lng_coords[0], lat_lng_coords[1] # Could not get it to work. #for ind in wiki_unique_postal.index: # lat, long = get_coords(wiki_unique_postal['PostalCode'][ind]) # wiki_unique_postal['Latitude'][ind] = lat # wiki_unique_postal['Longitude'][ind] = long ###Output _____no_output_____ ###Markdown Instead of using Google, I went ahead with the CSV file.
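An alternative to the manual csv-module loop that follows is to load the coordinates file with pandas and merge on the postal code. A sketch, assuming the same Geospatial_Coordinates.csv layout with a 'Postal Code' column (the small frames here are hypothetical stand-ins for the real data):

```python
import pandas as pd

# Hypothetical stand-ins; in the notebook the first frame would come from
# pd.read_csv('Geospatial_Coordinates.csv')
coords = pd.DataFrame({'Postal Code': ['M1B', 'M5A'],
                       'Latitude': [43.8067, 43.6543],
                       'Longitude': [-79.1944, -79.3606]})
postal = pd.DataFrame({'PostalCode': ['M5A', 'M1B'],
                       'Borough': ['Downtown Toronto', 'Scarborough']})

# Left-merge keeps the postal frame's row order and attaches coordinates
merged = postal.merge(coords, left_on='PostalCode', right_on='Postal Code', how='left')
merged = merged.drop(columns='Postal Code')
print(merged[['PostalCode', 'Latitude', 'Longitude']])
```

This replaces the dictionary lookup loop with a single vectorized join.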
###Code import csv coords_dict = {} with open('Geospatial_Coordinates.csv', newline='') as f: reader = csv.reader(f) for row in reader: if 'Postal Code' in row[0]: continue coords_dict.update({row[0]: [row[1], row[2]]}) coords_dict['M1B'][0] latitude = [] longitude = [] for ind in wiki_unique_postal.index: if wiki_unique_postal['PostalCode'][ind] in coords_dict: latitude.append(coords_dict[wiki_unique_postal['PostalCode'][ind]][0]) longitude.append(coords_dict[wiki_unique_postal['PostalCode'][ind]][1]) wiki_unique_postal.insert(3, "Latitude", latitude, True) wiki_unique_postal.head() wiki_unique_postal.insert(4, "Longitude", longitude, True) wiki_unique_postal.head(11) ###Output _____no_output_____ ###Markdown Beginning of Part 3 This section follows the same analysis that was performed on the New York City data. Begin data review to filter the dataframe for processing. ###Code wiki_unique_postal[wiki_unique_postal['Borough'].str.contains('Toronto')].info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 38 entries, 37 to 87 Data columns (total 5 columns): PostalCode 38 non-null object Borough 38 non-null object Neighborhood 38 non-null object Latitude 38 non-null object Longitude 38 non-null object dtypes: object(5) memory usage: 1.8+ KB ###Markdown Convert string objects in Latitude and Longitude to floats. ###Code wiki_unique_postal = wiki_unique_postal.astype({'Latitude': float, 'Longitude': float}) wiki_unique_postal[wiki_unique_postal['Borough'].str.contains('Toronto')].info() wiki_unique_postal.head() ###Output _____no_output_____ ###Markdown Filter boroughs whose name contains the string 'Toronto'. ###Code toronto_df = wiki_unique_postal[wiki_unique_postal['Borough'].str.contains('Toronto')].reset_index() toronto_df.head() toronto_df.Borough.unique() ###Output _____no_output_____ ###Markdown Load Dependencies for plotting and further data processing.
###Code # Importing dependencies import numpy as np # library to handle data in a vectorized manner import pandas as pd # library for data analysis pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) import json # library to handle JSON files !conda install -c conda-forge geopy --yes # uncomment this line if you haven't completed the Foursquare API lab from geopy.geocoders import Nominatim # convert an address into latitude and longitude values import requests # library to handle requests from pandas.io.json import json_normalize # transform JSON file into a pandas dataframe # Matplotlib and associated plotting modules import matplotlib.cm as cm import matplotlib.colors as colors # import k-means from clustering stage from sklearn.cluster import KMeans #!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab import folium # map rendering library print('Libraries imported.') address = 'Toronto, Ontario' geolocator = Nominatim(user_agent="ny_explorer") location = geolocator.geocode(address) latitude = location.latitude longitude = location.longitude print('The geographical coordinates of Toronto, Ontario are {}, {}.'.format(latitude, longitude)) ###Output The geographical coordinates of Toronto, Ontario are 43.653963, -79.387207. ###Markdown Build a map plotting the neighborhoods.
###Code # create map of Toronto using latitude and longitude values map_toronto = folium.Map(location=[latitude, longitude], zoom_start=11) # add markers to map for lat, lng, label in zip(toronto_df['Latitude'], toronto_df['Longitude'], toronto_df['Neighborhood']): label = folium.Popup(label, parse_html=True) folium.CircleMarker( [lat, lng], radius=5, popup=label, color='blue', fill=True, fill_color='#3186cc', fill_opacity=0.7, parse_html=False).add_to(map_toronto) map_toronto CLIENT_ID = 'fill in' # your Foursquare ID CLIENT_SECRET = 'fill in' # your Foursquare Secret VERSION = '20180605' # Foursquare API version LIMIT = 100 # limit of number of venues returned by Foursquare API radius = 500 # define radius ###Output _____no_output_____ ###Markdown Query the Foursquare API and pull up to 100 venues within a 500 m radius of each neighborhood. The function below makes the API call and handles the returned request data. ###Code def getNearbyVenues(names, latitudes, longitudes, radius=500): venues_list=[] for name, lat, lng in zip(names, latitudes, longitudes): print(name) # create the API request URL url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format( CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, LIMIT) # make the GET request results = requests.get(url).json()["response"]['groups'][0]['items'] # return only relevant information for each nearby venue venues_list.append([( name, lat, lng, v['venue']['name'], v['venue']['location']['lat'], v['venue']['location']['lng'], v['venue']['categories'][0]['name']) for v in results]) nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list]) nearby_venues.columns = ['Neighborhood', 'Neighborhood Latitude', 'Neighborhood Longitude', 'Venue', 'Venue Latitude', 'Venue Longitude', 'Venue Category'] return(nearby_venues) ###Output _____no_output_____ ###Markdown Execute the function on all neighborhoods in the toronto_df dataframe.
###Code toronto_venues = getNearbyVenues(names=toronto_df['Neighborhood'], latitudes=toronto_df['Latitude'], longitudes=toronto_df['Longitude'] ) ###Output The Beaches The Danforth West,Riverdale The Beaches West,India Bazaar Studio District Lawrence Park Davisville North North Toronto West Davisville Moore Park,Summerhill East Deer Park,Forest Hill SE,Rathnelly,South Hill,Summerhill West Rosedale Cabbagetown,St. James Town Church and Wellesley Harbourfront,Regent Park Ryerson,Garden District St. James Town Berczy Park Central Bay Street Adelaide,King,Richmond Harbourfront East,Toronto Islands,Union Station Design Exchange,Toronto Dominion Centre Commerce Court,Victoria Hotel Roselawn Forest Hill North,Forest Hill West The Annex,North Midtown,Yorkville Harbord,University of Toronto Chinatown,Grange Park,Kensington Market CN Tower,Bathurst Quay,Island airport,Harbourfront West,King and Spadina,Railway Lands,South Niagara Stn A PO Boxes 25 The Esplanade First Canadian Place,Underground city Christie Dovercourt Village,Dufferin Little Portugal,Trinity Brockton,Exhibition Place,Parkdale Village High Park,The Junction South Parkdale,Roncesvalles Runnymede,Swansea Business Reply Mail Processing Centre 969 Eastern ###Markdown Review data and transform for K-means clustering ###Code print(toronto_venues.shape) toronto_venues.head() toronto_venues.groupby('Neighborhood').count() print('There are {} unique categories.'.format(len(toronto_venues['Venue Category'].unique()))) # Pulled and modified from DP0701EN-3-3-2 # one hot encoding toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="") # add neighborhood column back to dataframe toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood'] # move neighborhood column to the first column fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1]) toronto_onehot = toronto_onehot[fixed_columns] toronto_onehot.head() toronto_onehot.shape toronto_grouped =
toronto_onehot.groupby('Neighborhood').mean().reset_index() toronto_grouped toronto_grouped.shape num_top_venues = 5 for hood in toronto_grouped['Neighborhood']: print("----"+hood+"----") temp = toronto_grouped[toronto_grouped['Neighborhood'] == hood].T.reset_index() temp.columns = ['venue','freq'] temp = temp.iloc[1:] temp['freq'] = temp['freq'].astype(float) temp = temp.round({'freq': 2}) print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues)) print('\n') # Function that sorts the venues in descending order. def return_most_common_venues(row, num_top_venues): row_categories = row.iloc[1:] row_categories_sorted = row_categories.sort_values(ascending=False) return row_categories_sorted.index.values[0:num_top_venues] ###Output _____no_output_____ ###Markdown Calls return_most_common_venues, sorting and slicing out the top 10 venues for each neighborhood. ###Code num_top_venues = 10 indicators = ['st', 'nd', 'rd'] # create columns according to number of top venues columns = ['Neighborhood'] for ind in np.arange(num_top_venues): try: columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind])) except: columns.append('{}th Most Common Venue'.format(ind+1)) # create a new dataframe neighborhood_venues_sorted = pd.DataFrame(columns=columns) neighborhood_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood'] for ind in np.arange(toronto_grouped.shape[0]): neighborhood_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues) neighborhood_venues_sorted.head() ###Output _____no_output_____ ###Markdown Build and run K-means clustering.
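The clustering step below fixes kclusters = 17 without further justification. One common way to sanity-check the choice of k is the elbow method: plot the within-cluster sum of squares (inertia) over a range of k and look for where the curve flattens. A sketch on synthetic data (the real run would use toronto_grouped_clustering instead of the made-up blobs here):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for the venue-frequency matrix: three loose blobs
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(30, 2) + c for c in ([0, 0], [5, 5], [0, 5])])

# Inertia (within-cluster sum of squares) for a range of k values;
# the "elbow" is where adding clusters stops paying off
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 8)]
print([round(i, 1) for i in inertias])
```

On this toy data the drop is steep up to k = 3 (the true number of blobs) and shallow afterwards.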
###Code # set number of clusters kclusters = 17 toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1) # run k-means clustering kmeans = KMeans(init='k-means++', n_clusters=kclusters, random_state=0, n_init=12).fit(toronto_grouped_clustering) # check cluster labels generated for each row in the dataframe kmeans.labels_[0:10] # add clustering labels neighborhood_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_) toronto_merged = toronto_df # merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood toronto_merged = toronto_merged.join(neighborhood_venues_sorted.set_index('Neighborhood'), on='Neighborhood') toronto_merged.tail() # check the last columns! ###Output _____no_output_____ ###Markdown Builds map with the K-means clustered data. ###Code # create map map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11) # set color scheme for the clusters x = np.arange(kclusters) ys = [i + x + (i*x)**2 for i in range(kclusters)] colors_array = cm.rainbow(np.linspace(0, 1, len(ys))) rainbow = [colors.rgb2hex(i) for i in colors_array] # add markers to the map markers_colors = [] for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']): label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True) folium.CircleMarker( [lat, lon], radius=5, popup=label, color=rainbow[cluster-1], fill=True, fill_color=rainbow[cluster-1], fill_opacity=0.7).add_to(map_clusters) map_clusters toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 1, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 2, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster 
Labels'] == 3, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 4, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 5, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 6, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 7, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 8, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 9, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 10, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 11, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 12, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 13, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 14, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 15, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 16, toronto_merged.columns[[2] + list(range(5, toronto_merged.shape[1]))]] ###Output _____no_output_____
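The seventeen near-identical per-cluster cells above can be collapsed into a single loop. A sketch on a toy frame (the notebook itself would iterate over toronto_merged and its real columns):

```python
import pandas as pd

# Toy stand-in for toronto_merged: a 'Cluster Labels' column plus data columns
df = pd.DataFrame({'Borough': ['A', 'B', 'C', 'D'],
                   'Cluster Labels': [0, 1, 0, 2],
                   '1st Most Common Venue': ['Cafe', 'Park', 'Cafe', 'Gym']})

# One pass instead of one notebook cell per cluster label
for k in sorted(df['Cluster Labels'].unique()):
    subset = df.loc[df['Cluster Labels'] == k, ['Borough', '1st Most Common Venue']]
    print(f'--- cluster {k} ({len(subset)} rows) ---')
    print(subset.to_string(index=False))
```

In a notebook, display(subset) instead of print would keep the rich table rendering.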
Modulo8/Kata8.ipynb
###Markdown Exercise: Create and modify a Python dictionary ###Code planet = { 'name': 'Mars', 'moons': 2 } print(planet.values()) planet['circunferencia (km)'] = {'polar': 6752, 'equatorial': 6792} print(planet.get('name'),planet.get('circunferencia (km)')) ###Output Mars {'polar': 6752, 'equatorial': 6792} ###Markdown Exercise: Calculating values ###Code planet_moons = { 'mercury': 0, 'venus': 0, 'earth': 1, 'mars': 2, 'jupiter': 79, 'saturn': 82, 'uranus': 27, 'neptune': 14, 'pluto': 5, 'haumea': 2, 'makemake': 1, 'eris': 1 } moons = planet_moons.values() total_moons = 0 for elem in moons: total_moons += elem total_moons /= len(moons) print(total_moons) ###Output 17.833333333333332
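The running-total loop above can be written more compactly with the built-ins sum and len (same data, same result):

```python
planet_moons = {'mercury': 0, 'venus': 0, 'earth': 1, 'mars': 2,
                'jupiter': 79, 'saturn': 82, 'uranus': 27, 'neptune': 14,
                'pluto': 5, 'haumea': 2, 'makemake': 1, 'eris': 1}

# sum/len over the dict's values replaces the manual accumulator
average_moons = sum(planet_moons.values()) / len(planet_moons)
print(average_moons)  # 17.833333333333332
```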
dqn/exercise/Deep_Q_Network.ipynb
###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
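As a concrete illustration of the loss computation the `learn` method needs, here is a minimal numpy sketch of the arithmetic only. The real implementation operates on the PyTorch tensors sampled from the replay buffer; the batch values below are made up for illustration:

```python
import numpy as np

gamma = 0.99  # discount factor

# Hypothetical mini-batch: target-network Q-values for the next states,
# plus rewards and done flags (all values made up for illustration)
q_next_target = np.array([[0.1, 0.5, 0.2, 0.0],
                          [0.3, 0.1, 0.4, 0.2]])  # shape (batch, n_actions)
rewards = np.array([1.0, -1.0])
dones = np.array([0.0, 1.0])  # 1 where the episode ended

# TD target: r + gamma * max_a' Q_target(s', a'), zeroed where done
q_targets = rewards + gamma * q_next_target.max(axis=1) * (1 - dones)

# Q-values the local network predicted for the actions actually taken
q_expected = np.array([1.2, -0.8])

# Mean squared error between expected and target values: the loss to minimize
loss = np.mean((q_expected - q_targets) ** 2)
print(q_targets, loss)
```

In dqn_agent.py the same arithmetic would be expressed with PyTorch tensors so the loss can be backpropagated through the local network.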
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() print(state) for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output [-5.9156417e-04 1.4134574e+00 -5.9935719e-02 1.1277095e-01 6.9228926e-04 1.3576316e-02 0.0000000e+00 0.0000000e+00] ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -158.06 Episode 200 Average Score: -147.32 Episode 300 Average Score: -139.96 Episode 400 Average Score: -137.19 Episode 500 Average Score: -144.97 Episode 600 Average Score: -131.44 Episode 700 Average Score: -131.23 Episode 800 Average Score: -135.32 Episode 900 Average Score: -127.63 Episode 1000 Average Score: -134.08 Episode 1100 Average Score: -134.44 Episode 1200 Average Score: -129.71 Episode 1300 Average Score: -134.01 Episode 1400 
Average Score: -129.74 Episode 1500 Average Score: -132.39 Episode 1600 Average Score: -132.87 Episode 1700 Average Score: -134.48 Episode 1800 Average Score: -133.26 Episode 1900 Average Score: -127.53 Episode 2000 Average Score: -125.58 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -208.91 Episode 200 Average Score: -137.61 Episode 300 Average Score: -86.651 Episode 400 Average Score: -35.19 Episode 500 Average Score: -34.94 Episode 600 Average Score: 3.7445 Episode 700 Average Score: 170.15 Episode 800 Average Score: 90.061 Episode 900 Average Score: 185.83 Episode 1000 Average Score: 195.46 Episode 1012 Average Score: 200.28 Environment solved in 912 episodes! Average Score: 200.28 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -183.04 Episode 200 Average Score: -94.25 Episode 300 Average Score: -52.02 Episode 400 Average Score: 49.68 Episode 500 Average Score: 137.11 Episode 600 Average Score: 193.46 Episode 635 Average Score: 201.07 Environment solved in 535 episodes! Average Score: 201.07 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
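As an aside on the training loop above: the multiplicative epsilon schedule has a closed form, `eps_n = max(eps_end, eps_start * eps_decay**n)`, so you can compute directly at which episode exploration bottoms out. A minimal sketch, assuming the default `eps_start=1.0`, `eps_end=0.01`, `eps_decay=0.995` used above:

```python
import math

def epsilon_at(n, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Epsilon after n multiplicative decay steps (one per episode)."""
    return max(eps_end, eps_start * eps_decay ** n)

# First episode at which epsilon is clipped to its floor:
n_floor = math.ceil(math.log(0.01 / 1.0) / math.log(0.995))
print(n_floor)              # 919 episodes with the default settings
print(epsilon_at(n_floor))  # clipped to eps_end = 0.01
```

This explains why runs with these defaults still act almost greedily well before the environment is solved: epsilon has already decayed close to its floor by episode 900.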
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code # from dqn_agent import Agent # agent = Agent(state_size=8, action_size=4, seed=0) # # watch an untrained agent # state = env.reset() # for j in range(200): # action = agent.act(state) # env.render() # state, reward, done, _ = env.step(action) # if done: # break # env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn( n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995 ): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage 
Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores from dqn_agent import Agent n_episodes=2000 max_t=1000 #Classic approach # agent = Agent(state_size=8, action_size=4, seed=0, ddqn=False ) # dqn_scores = dqn( n_episodes=n_episodes, max_t=max_t, eps_start=1.0, eps_end=0.01, eps_decay=0.995 ) # # plot the scores # fig = plt.figure() # ax = fig.add_subplot(111) # plt.plot(np.arange(len(dqn_scores)), dqn_scores) # plt.ylabel('Score') # plt.xlabel('Episode #') # plt.show() # agent = Agent(state_size=8, action_size=4, seed=0, ddqn=True ) # ddqn_scores = dqn( n_episodes=n_episodes, max_t=max_t, eps_start=1.0, eps_end=0.01, eps_decay=0.995 ) # # plot the scores # fig = plt.figure() # ax = fig.add_subplot(111) # plt.plot(np.arange(len(ddqn_scores)), ddqn_scores) # plt.plot(np.arange(len(dqn_scores)), dqn_scores) # plt.ylabel('Score') # plt.xlabel('Episode #') # plt.show() # prioritize replay testing prioritize_weights = [0.25, 0.5, 0.75, 1.0 ] scores = [] for a in prioritize_weights: print( "Testing with prioritize weight: ", a ) agent = Agent( state_size=8, action_size=4, seed=0, ddqn=False, init_td=1e-6, prioritize_weight=a, sampling_error_weight=0.5 ) score = dqn( n_episodes=n_episodes, max_t=max_t, eps_start=1.0, eps_end=0.01, eps_decay=0.995 ) scores.append( score ) # plot the scores fig = plt.figure() ax = fig.add_subplot(111) # plt.plot(np.arange(len(dqn_scores)), dqn_scores) # dqn_scores only exists if the commented-out classic run above was executed for score in scores: plt.plot(np.arange(len(score)), score) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Testing with prioritize weight: 0.25 Episode 100 Average Score: -179.58 Episode 200 Average Score: -143.03 Episode 300 Average Score: -46.513 Episode 310 Average Score: -46.93 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
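Since the sweep above varies `prioritize_weight`, it may help to recall the quantity that parameter presumably controls (an assumption — the parameter lives in this notebook's custom `dqn_agent.py`): in prioritized experience replay, transition i is drawn with probability P(i) = p_i^α / Σ_j p_j^α, where α = 0 recovers uniform sampling and α = 1 samples proportionally to priority. A pure-Python sketch of that sampling distribution:

```python
def sampling_probs(priorities, alpha):
    """P(i) = p_i**alpha / sum_j p_j**alpha (prioritized experience replay)."""
    scaled = [p ** alpha for p in priorities]
    total = sum(scaled)
    return [s / total for s in scaled]

td_errors = [0.1, 0.5, 2.0, 4.0]             # hypothetical |TD error| priorities
print(sampling_probs(td_errors, alpha=0.0))  # uniform: every entry 0.25
probs = sampling_probs(td_errors, alpha=1.0)
print(probs)                                 # proportional to the priorities
```

Intermediate values such as the 0.25–1.0 grid tested above interpolate between those two extremes, which is exactly what the score curves are meant to compare.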
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) # watch the trained agent for i in range(3): state = env.reset() for j in range(1000): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}\tepsilon: {:.2f}'.format(i_episode, np.mean(scores_window), eps), end="") if i_episode % 100 == 0: print('\nEpisode {}\tAverage Score: {:.2f}\tepsilon: {:.2f}'.format(i_episode, np.mean(scores_window), eps), end="") if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -146.28 epsilon: 0.61 Episode 200 Average Score: -81.15 epsilon: 0.374 Episode 300 Average Score: -73.42 epsilon: 0.22 Episode 400 Average Score: -30.92 epsilon: 0.13 Episode 500 Average Score: 9.71 epsilon: 0.0880 Episode 600 Average Score: 140.47 epsilon: 0.05 Episode 636 Average Score: 200.61 epsilon: 0.04 Environment solved in 536 episodes! Average Score: 200.61 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(6): state = env.reset() for j in range(300): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Collecting box2d Downloading https://files.pythonhosted.org/packages/cc/7b/ddb96fea1fa5b24f8929714ef483f64c33e9649e7aae066e5f5023ea426a/Box2D-2.3.2.tar.gz (427kB) 100% 430kB 2.5MB/s Installing collected packages: box2d Running setup.py install for box2d ... done Successfully installed box2d-2.3.2 You are using pip version 9.0.3, however version 20.2.3 is available. You should consider upgrading via the 'pip install --upgrade pip' command. 
Collecting pyvirtualdisplay Downloading https://files.pythonhosted.org/packages/d0/8a/643043cc70791367bee2d19eb20e00ed1a246ac48e5dbe57bbbcc8be40a9/PyVirtualDisplay-1.3.2-py2.py3-none-any.whl Collecting EasyProcess (from pyvirtualdisplay) Downloading https://files.pythonhosted.org/packages/48/3c/75573613641c90c6d094059ac28adb748560d99bd27ee6f80cce398f404e/EasyProcess-0.3-py2.py3-none-any.whl Installing collected packages: EasyProcess, pyvirtualdisplay Successfully installed EasyProcess-0.3 pyvirtualdisplay-1.3.2 ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) agent # watch an untrained agent state = env.reset() #img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) #img.set_data(env.render(mode='rgb_array')) #plt.axis('off') #display.display(plt.gcf()) #display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: 
{:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -208.91 Episode 200 Average Score: -138.30 Episode 300 Average Score: -68.535 Episode 400 Average Score: -72.05 Episode 500 Average Score: -38.25 Episode 600 Average Score: -43.42 Episode 700 Average Score: 94.690 Episode 800 Average Score: 129.09 Episode 896 Average Score: 112.54 ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output /home/brabeem/anaconda3/lib/python3.7/site-packages/ale_py/roms/utils.py:90: DeprecationWarning: SelectableGroups dict interface is deprecated. Use select. for external in metadata.entry_points().get(self.group, []): ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
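Everything in this notebook relies only on the classic Gym interface — `reset()` returning an initial state and `step(action)` returning `(next_state, reward, done, info)`. As an illustrative stand-in (a hypothetical toy environment, not part of the course code), a minimal class obeying that contract looks like:

```python
class CountdownEnv:
    """Toy environment following the classic Gym-style reset/step contract.

    The episode starts at 10; each step subtracts the chosen action (0 or 1),
    and the episode ends when the counter reaches 0.
    """

    def reset(self):
        self.state = 10
        return self.state

    def step(self, action):
        self.state -= action
        reward = -1                  # per-step cost, as in shortest-path tasks
        done = self.state <= 0
        return self.state, reward, done, {}

env_demo = CountdownEnv()
state = env_demo.reset()
done = False
steps = 0
while not done:
    state, reward, done, _ = env_demo.step(1)  # always decrement
    steps += 1
print(steps)  # 10 steps to finish the episode
```

Any agent written against `reset`/`step`, including the `Agent` class used here, can interact with an environment shaped like this, which is what makes the training loop below environment-agnostic.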
###Code env = gym.make('LunarLander-v2') env.seed(0) print("observation_space: ",env.observation_space) print("action_space: ",env.action_space) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output observation_space: Box([-inf -inf -inf -inf -inf -inf -inf -inf], [inf inf inf inf inf inf inf inf], (8,), float32) action_space: Discrete(4) State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code t = torch.tensor([[1, 2], [3, 4]]) torch.gather(t, 0, torch.tensor([[1], [0]])) np.vstack([1,2,3]) from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if 
np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') #break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -98.75 Episode 200 Average Score: -24.86 Episode 300 Average Score: 72.077 Episode 400 Average Score: 165.46 Episode 453 Average Score: 200.02 Environment solved in 353 episodes! Average Score: 200.02 Episode 454 Average Score: 200.51 Environment solved in 354 episodes! Average Score: 200.51 Episode 455 Average Score: 202.01 Environment solved in 355 episodes! Average Score: 202.01 Episode 456 Average Score: 202.23 Environment solved in 356 episodes! Average Score: 202.23 Episode 457 Average Score: 201.97 Environment solved in 357 episodes! Average Score: 201.97 Episode 458 Average Score: 201.44 Environment solved in 358 episodes! Average Score: 201.44 Episode 459 Average Score: 201.65 Environment solved in 359 episodes! Average Score: 201.65 Episode 460 Average Score: 201.27 Environment solved in 360 episodes! Average Score: 201.27 Episode 461 Average Score: 203.22 Environment solved in 361 episodes! Average Score: 203.22 Episode 462 Average Score: 205.40 Environment solved in 362 episodes! Average Score: 205.40 Episode 463 Average Score: 205.36 Environment solved in 363 episodes! Average Score: 205.36 Episode 464 Average Score: 205.77 Environment solved in 364 episodes! Average Score: 205.77 Episode 465 Average Score: 204.28 Environment solved in 365 episodes! Average Score: 204.28 Episode 466 Average Score: 205.46 Environment solved in 366 episodes! Average Score: 205.46 Episode 467 Average Score: 208.21 Environment solved in 367 episodes! 
Average Score: 208.21 Episode 468 Average Score: 208.84 Environment solved in 368 episodes! Average Score: 208.84 Episode 469 Average Score: 208.25 Environment solved in 369 episodes! Average Score: 208.25 Episode 470 Average Score: 208.33 Environment solved in 370 episodes! Average Score: 208.33 Episode 471 Average Score: 208.56 Environment solved in 371 episodes! Average Score: 208.56 Episode 472 Average Score: 208.12 Environment solved in 372 episodes! Average Score: 208.12 Episode 473 Average Score: 208.15 Environment solved in 373 episodes! Average Score: 208.15 Episode 474 Average Score: 210.86 Environment solved in 374 episodes! Average Score: 210.86 Episode 475 Average Score: 213.39 Environment solved in 375 episodes! Average Score: 213.39 Episode 476 Average Score: 210.57 Environment solved in 376 episodes! Average Score: 210.57 Episode 477 Average Score: 211.60 Environment solved in 377 episodes! Average Score: 211.60 Episode 478 Average Score: 211.35 Environment solved in 378 episodes! Average Score: 211.35 Episode 479 Average Score: 210.33 Environment solved in 379 episodes! Average Score: 210.33 Episode 480 Average Score: 211.78 Environment solved in 380 episodes! Average Score: 211.78 Episode 481 Average Score: 212.30 Environment solved in 381 episodes! Average Score: 212.30 Episode 482 Average Score: 213.76 Environment solved in 382 episodes! Average Score: 213.76 Episode 483 Average Score: 214.05 Environment solved in 383 episodes! Average Score: 214.05 Episode 484 Average Score: 214.15 Environment solved in 384 episodes! Average Score: 214.15 Episode 485 Average Score: 214.06 Environment solved in 385 episodes! Average Score: 214.06 Episode 486 Average Score: 216.58 Environment solved in 386 episodes! Average Score: 216.58 Episode 487 Average Score: 216.79 Environment solved in 387 episodes! Average Score: 216.79 Episode 488 Average Score: 216.76 Environment solved in 388 episodes! 
Episode 489	Average Score: 216.57
Environment solved in 389 episodes!	Average Score: 216.57
Episode 500	Average Score: 223.81
Episode 550	Average Score: 220.87
Episode 600	Average Score: 224.79
Episode 650	Average Score: 230.67
Episode 700	Average Score: 230.31
Episode 750	Average Score: 235.38
Episode 800	Average Score: 248.15
Episode 850	Average Score: 250.63
Episode 900	Average Score: 243.80
Episode 929	Average Score: 244.75
Average Score: 244.75 Episode 930 Average Score: 244.48 Environment solved in 830 episodes! Average Score: 244.48 Episode 931 Average Score: 244.41 Environment solved in 831 episodes! Average Score: 244.41 Episode 932 Average Score: 244.39 Environment solved in 832 episodes! Average Score: 244.39 Episode 933 Average Score: 244.44 Environment solved in 833 episodes! Average Score: 244.44 Episode 934 Average Score: 244.54 Environment solved in 834 episodes! Average Score: 244.54 Episode 935 Average Score: 244.87 Environment solved in 835 episodes! Average Score: 244.87 Episode 936 Average Score: 244.71 Environment solved in 836 episodes! Average Score: 244.71 Episode 937 Average Score: 244.65 Environment solved in 837 episodes! Average Score: 244.65 Episode 938 Average Score: 244.77 Environment solved in 838 episodes! Average Score: 244.77 Episode 939 Average Score: 244.44 Environment solved in 839 episodes! Average Score: 244.44 Episode 940 Average Score: 244.26 Environment solved in 840 episodes! Average Score: 244.26 Episode 941 Average Score: 244.25 Environment solved in 841 episodes! Average Score: 244.25 Episode 942 Average Score: 244.74 Environment solved in 842 episodes! Average Score: 244.74 Episode 943 Average Score: 244.89 Environment solved in 843 episodes! Average Score: 244.89 Episode 944 Average Score: 244.80 Environment solved in 844 episodes! Average Score: 244.80 Episode 945 Average Score: 244.94 Environment solved in 845 episodes! Average Score: 244.94 Episode 946 Average Score: 245.35 Environment solved in 846 episodes! Average Score: 245.35 Episode 947 Average Score: 245.41 Environment solved in 847 episodes! Average Score: 245.41 Episode 948 Average Score: 245.31 Environment solved in 848 episodes! Average Score: 245.31 Episode 949 Average Score: 245.38 Environment solved in 849 episodes! Average Score: 245.38 Episode 950 Average Score: 245.52 Environment solved in 850 episodes! 
Average Score: 245.52 Episode 951 Average Score: 245.59 Environment solved in 851 episodes! Average Score: 245.59 Episode 952 Average Score: 245.28 Environment solved in 852 episodes! Average Score: 245.28 Episode 953 Average Score: 245.44 Environment solved in 853 episodes! Average Score: 245.44 Episode 954 Average Score: 245.42 Environment solved in 854 episodes! Average Score: 245.42 Episode 955 Average Score: 245.70 Environment solved in 855 episodes! Average Score: 245.70 Episode 956 Average Score: 245.88 Environment solved in 856 episodes! Average Score: 245.88 Episode 957 Average Score: 245.90 Environment solved in 857 episodes! Average Score: 245.90 Episode 958 Average Score: 245.62 Environment solved in 858 episodes! Average Score: 245.62 Episode 959 Average Score: 245.38 Environment solved in 859 episodes! Average Score: 245.38 Episode 960 Average Score: 245.10 Environment solved in 860 episodes! Average Score: 245.10 Episode 961 Average Score: 250.41 Environment solved in 861 episodes! Average Score: 250.41 Episode 962 Average Score: 250.45 Environment solved in 862 episodes! Average Score: 250.45 Episode 963 Average Score: 250.57 Environment solved in 863 episodes! Average Score: 250.57 Episode 964 Average Score: 250.36 Environment solved in 864 episodes! Average Score: 250.36 Episode 965 Average Score: 250.26 Environment solved in 865 episodes! Average Score: 250.26 Episode 966 Average Score: 250.69 Environment solved in 866 episodes! Average Score: 250.69 Episode 967 Average Score: 250.71 Environment solved in 867 episodes! Average Score: 250.71 Episode 968 Average Score: 250.42 Environment solved in 868 episodes! Average Score: 250.42 Episode 969 Average Score: 250.31 Environment solved in 869 episodes! Average Score: 250.31 Episode 970 Average Score: 250.38 Environment solved in 870 episodes! Average Score: 250.38 Episode 971 Average Score: 250.20 Environment solved in 871 episodes! 
Average Score: 250.20 Episode 972 Average Score: 249.88 Environment solved in 872 episodes! Average Score: 249.88 Episode 973 Average Score: 249.69 Environment solved in 873 episodes! Average Score: 249.69 Episode 974 Average Score: 249.93 Environment solved in 874 episodes! Average Score: 249.93 Episode 975 Average Score: 250.12 Environment solved in 875 episodes! Average Score: 250.12 Episode 976 Average Score: 249.87 Environment solved in 876 episodes! Average Score: 249.87 Episode 977 Average Score: 250.03 Environment solved in 877 episodes! Average Score: 250.03 Episode 978 Average Score: 249.71 Environment solved in 878 episodes! Average Score: 249.71 Episode 979 Average Score: 250.24 Environment solved in 879 episodes! Average Score: 250.24 Episode 980 Average Score: 250.49 Environment solved in 880 episodes! Average Score: 250.49 Episode 981 Average Score: 250.32 Environment solved in 881 episodes! Average Score: 250.32 Episode 982 Average Score: 249.98 Environment solved in 882 episodes! Average Score: 249.98 Episode 983 Average Score: 250.08 Environment solved in 883 episodes! Average Score: 250.08 Episode 984 Average Score: 250.24 Environment solved in 884 episodes! Average Score: 250.24 Episode 985 Average Score: 249.94 Environment solved in 885 episodes! Average Score: 249.94 Episode 986 Average Score: 249.92 Environment solved in 886 episodes! Average Score: 249.92 Episode 987 Average Score: 250.37 Environment solved in 887 episodes! Average Score: 250.37 Episode 988 Average Score: 250.14 Environment solved in 888 episodes! Average Score: 250.14 Episode 989 Average Score: 250.36 Environment solved in 889 episodes! Average Score: 250.36 Episode 990 Average Score: 250.37 Environment solved in 890 episodes! Average Score: 250.37 Episode 991 Average Score: 250.49 Environment solved in 891 episodes! Average Score: 250.49 Episode 992 Average Score: 255.14 Environment solved in 892 episodes! 
Average Score: 255.14 Episode 993 Average Score: 255.16 Environment solved in 893 episodes! Average Score: 255.16 Episode 994 Average Score: 254.77 Environment solved in 894 episodes! Average Score: 254.77 Episode 995 Average Score: 254.77 Environment solved in 895 episodes! Average Score: 254.77 Episode 996 Average Score: 254.74 Environment solved in 896 episodes! Average Score: 254.74 Episode 997 Average Score: 254.80 Environment solved in 897 episodes! Average Score: 254.80 Episode 998 Average Score: 254.79 Environment solved in 898 episodes! Average Score: 254.79 Episode 999 Average Score: 254.13 Environment solved in 899 episodes! Average Score: 254.13 Episode 1000 Average Score: 253.97 Environment solved in 900 episodes! Average Score: 253.97 Episode 1001 Average Score: 254.05 Environment solved in 901 episodes! Average Score: 254.05 Episode 1002 Average Score: 254.11 Environment solved in 902 episodes! Average Score: 254.11 Episode 1003 Average Score: 254.06 Environment solved in 903 episodes! Average Score: 254.06 Episode 1004 Average Score: 253.77 Environment solved in 904 episodes! Average Score: 253.77 Episode 1005 Average Score: 254.02 Environment solved in 905 episodes! Average Score: 254.02 Episode 1006 Average Score: 253.89 Environment solved in 906 episodes! Average Score: 253.89 Episode 1007 Average Score: 254.21 Environment solved in 907 episodes! Average Score: 254.21 Episode 1008 Average Score: 254.06 Environment solved in 908 episodes! Average Score: 254.06 Episode 1009 Average Score: 254.09 Environment solved in 909 episodes! Average Score: 254.09 Episode 1010 Average Score: 256.45 Environment solved in 910 episodes! Average Score: 256.45 Episode 1011 Average Score: 255.71 Environment solved in 911 episodes! Average Score: 255.71 Episode 1012 Average Score: 255.28 Environment solved in 912 episodes! Average Score: 255.28 Episode 1013 Average Score: 255.40 Environment solved in 913 episodes! 
Average Score: 255.40 Episode 1014 Average Score: 255.40 Environment solved in 914 episodes! Average Score: 255.40 Episode 1015 Average Score: 255.23 Environment solved in 915 episodes! Average Score: 255.23 Episode 1016 Average Score: 255.27 Environment solved in 916 episodes! Average Score: 255.27 Episode 1017 Average Score: 255.84 Environment solved in 917 episodes! Average Score: 255.84 Episode 1018 Average Score: 255.64 Environment solved in 918 episodes! Average Score: 255.64 Episode 1019 Average Score: 255.55 Environment solved in 919 episodes! Average Score: 255.55 Episode 1020 Average Score: 254.91 Environment solved in 920 episodes! Average Score: 254.91 Episode 1021 Average Score: 255.16 Environment solved in 921 episodes! Average Score: 255.16 Episode 1022 Average Score: 254.75 Environment solved in 922 episodes! Average Score: 254.75 Episode 1023 Average Score: 254.41 Environment solved in 923 episodes! Average Score: 254.41 Episode 1024 Average Score: 254.05 Environment solved in 924 episodes! Average Score: 254.05 Episode 1025 Average Score: 253.90 Environment solved in 925 episodes! Average Score: 253.90 Episode 1026 Average Score: 254.04 Environment solved in 926 episodes! Average Score: 254.04 Episode 1027 Average Score: 256.17 Environment solved in 927 episodes! Average Score: 256.17 Episode 1028 Average Score: 255.98 Environment solved in 928 episodes! Average Score: 255.98 Episode 1029 Average Score: 255.99 Environment solved in 929 episodes! Average Score: 255.99 Episode 1030 Average Score: 256.52 Environment solved in 930 episodes! Average Score: 256.52 Episode 1031 Average Score: 256.49 Environment solved in 931 episodes! Average Score: 256.49 Episode 1032 Average Score: 256.34 Environment solved in 932 episodes! Average Score: 256.34 Episode 1033 Average Score: 256.01 Environment solved in 933 episodes! Average Score: 256.01 Episode 1034 Average Score: 256.31 Environment solved in 934 episodes! 
Average Score: 256.31 Episode 1035 Average Score: 255.80 Environment solved in 935 episodes! Average Score: 255.80 Episode 1036 Average Score: 255.78 Environment solved in 936 episodes! Average Score: 255.78 Episode 1037 Average Score: 255.58 Environment solved in 937 episodes! Average Score: 255.58 Episode 1038 Average Score: 255.50 Environment solved in 938 episodes! Average Score: 255.50 Episode 1039 Average Score: 255.59 Environment solved in 939 episodes! Average Score: 255.59 Episode 1040 Average Score: 255.30 Environment solved in 940 episodes! Average Score: 255.30 Episode 1041 Average Score: 255.26 Environment solved in 941 episodes! Average Score: 255.26 Episode 1042 Average Score: 255.47 Environment solved in 942 episodes! Average Score: 255.47 Episode 1043 Average Score: 255.19 Environment solved in 943 episodes! Average Score: 255.19 Episode 1044 Average Score: 255.75 Environment solved in 944 episodes! Average Score: 255.75 Episode 1045 Average Score: 255.46 Environment solved in 945 episodes! Average Score: 255.46 Episode 1046 Average Score: 255.76 Environment solved in 946 episodes! Average Score: 255.76 Episode 1047 Average Score: 255.83 Environment solved in 947 episodes! Average Score: 255.83 Episode 1048 Average Score: 255.83 Environment solved in 948 episodes! Average Score: 255.83 Episode 1049 Average Score: 255.88 Environment solved in 949 episodes! Average Score: 255.88 Episode 1050 Average Score: 256.04 Environment solved in 950 episodes! Average Score: 256.04 Episode 1051 Average Score: 255.81 Environment solved in 951 episodes! Average Score: 255.81 Episode 1052 Average Score: 255.78 Environment solved in 952 episodes! Average Score: 255.78 Episode 1053 Average Score: 255.58 Environment solved in 953 episodes! Average Score: 255.58 Episode 1054 Average Score: 255.90 Environment solved in 954 episodes! Average Score: 255.90 Episode 1055 Average Score: 255.54 Environment solved in 955 episodes! 
Average Score: 255.54 Episode 1056 Average Score: 252.63 Environment solved in 956 episodes! Average Score: 252.63 Episode 1057 Average Score: 252.64 Environment solved in 957 episodes! Average Score: 252.64 Episode 1058 Average Score: 252.79 Environment solved in 958 episodes! Average Score: 252.79 Episode 1059 Average Score: 252.76 Environment solved in 959 episodes! Average Score: 252.76 Episode 1060 Average Score: 252.70 Environment solved in 960 episodes! Average Score: 252.70 Episode 1061 Average Score: 252.28 Environment solved in 961 episodes! Average Score: 252.28 Episode 1062 Average Score: 252.45 Environment solved in 962 episodes! Average Score: 252.45 Episode 1063 Average Score: 252.14 Environment solved in 963 episodes! Average Score: 252.14 Episode 1064 Average Score: 252.11 Environment solved in 964 episodes! Average Score: 252.11 Episode 1065 Average Score: 251.99 Environment solved in 965 episodes! Average Score: 251.99 Episode 1066 Average Score: 252.01 Environment solved in 966 episodes! Average Score: 252.01 Episode 1067 Average Score: 251.78 Environment solved in 967 episodes! Average Score: 251.78 Episode 1068 Average Score: 251.79 Environment solved in 968 episodes! Average Score: 251.79 Episode 1069 Average Score: 251.61 Environment solved in 969 episodes! Average Score: 251.61 Episode 1070 Average Score: 251.07 Environment solved in 970 episodes! Average Score: 251.07 Episode 1071 Average Score: 250.79 Environment solved in 971 episodes! Average Score: 250.79 Episode 1072 Average Score: 250.76 Environment solved in 972 episodes! Average Score: 250.76 Episode 1073 Average Score: 251.12 Environment solved in 973 episodes! Average Score: 251.12 Episode 1074 Average Score: 251.24 Environment solved in 974 episodes! Average Score: 251.24 Episode 1075 Average Score: 250.87 Environment solved in 975 episodes! Average Score: 250.87 Episode 1076 Average Score: 250.78 Environment solved in 976 episodes! 
Average Score: 250.78 Episode 1077 Average Score: 250.25 Environment solved in 977 episodes! Average Score: 250.25 Episode 1078 Average Score: 247.97 Environment solved in 978 episodes! Average Score: 247.97 Episode 1079 Average Score: 247.33 Environment solved in 979 episodes! Average Score: 247.33 Episode 1080 Average Score: 247.07 Environment solved in 980 episodes! Average Score: 247.07 Episode 1081 Average Score: 247.14 Environment solved in 981 episodes! Average Score: 247.14 Episode 1082 Average Score: 247.72 Environment solved in 982 episodes! Average Score: 247.72 Episode 1083 Average Score: 247.86 Environment solved in 983 episodes! Average Score: 247.86 Episode 1084 Average Score: 247.72 Environment solved in 984 episodes! Average Score: 247.72 Episode 1085 Average Score: 247.74 Environment solved in 985 episodes! Average Score: 247.74 Episode 1086 Average Score: 247.78 Environment solved in 986 episodes! Average Score: 247.78 Episode 1087 Average Score: 247.48 Environment solved in 987 episodes! Average Score: 247.48 Episode 1088 Average Score: 247.33 Environment solved in 988 episodes! Average Score: 247.33 Episode 1089 Average Score: 246.98 Environment solved in 989 episodes! Average Score: 246.98 Episode 1090 Average Score: 246.51 Environment solved in 990 episodes! Average Score: 246.51 Episode 1091 Average Score: 246.89 Environment solved in 991 episodes! Average Score: 246.89 Episode 1092 Average Score: 247.15 Environment solved in 992 episodes! Average Score: 247.15 Episode 1093 Average Score: 244.16 Environment solved in 993 episodes! Average Score: 244.16 Episode 1094 Average Score: 244.33 Environment solved in 994 episodes! Average Score: 244.33 Episode 1095 Average Score: 243.97 Environment solved in 995 episodes! Average Score: 243.97 Episode 1096 Average Score: 243.43 Environment solved in 996 episodes! Average Score: 243.43 Episode 1097 Average Score: 243.60 Environment solved in 997 episodes! 
Average Score: 243.60 Episode 1098 Average Score: 243.58 Environment solved in 998 episodes! Average Score: 243.58 Episode 1099 Average Score: 243.77 Environment solved in 999 episodes! Average Score: 243.77 Episode 1100 Average Score: 244.23 Environment solved in 1000 episodes! Average Score: 244.23 Episode 1101 Average Score: 244.69 Environment solved in 1001 episodes! Average Score: 244.69 Episode 1102 Average Score: 244.97 Environment solved in 1002 episodes! Average Score: 244.97 Episode 1103 Average Score: 244.96 Environment solved in 1003 episodes! Average Score: 244.96 Episode 1104 Average Score: 245.03 Environment solved in 1004 episodes! Average Score: 245.03 Episode 1105 Average Score: 245.05 Environment solved in 1005 episodes! Average Score: 245.05 Episode 1106 Average Score: 244.94 Environment solved in 1006 episodes! Average Score: 244.94 Episode 1107 Average Score: 244.75 Environment solved in 1007 episodes! Average Score: 244.75 Episode 1108 Average Score: 245.02 Environment solved in 1008 episodes! Average Score: 245.02 Episode 1109 Average Score: 245.36 Environment solved in 1009 episodes! Average Score: 245.36 Episode 1110 Average Score: 245.05 Environment solved in 1010 episodes! Average Score: 245.05 Episode 1111 Average Score: 245.50 Environment solved in 1011 episodes! Average Score: 245.50 Episode 1112 Average Score: 245.57 Environment solved in 1012 episodes! Average Score: 245.57 Episode 1113 Average Score: 245.47 Environment solved in 1013 episodes! Average Score: 245.47 Episode 1114 Average Score: 245.62 Environment solved in 1014 episodes! Average Score: 245.62 Episode 1115 Average Score: 245.79 Environment solved in 1015 episodes! Average Score: 245.79 Episode 1116 Average Score: 245.96 Environment solved in 1016 episodes! Average Score: 245.96 Episode 1117 Average Score: 245.86 Environment solved in 1017 episodes! Average Score: 245.86 Episode 1118 Average Score: 246.09 Environment solved in 1018 episodes! 
Average Score: 246.09 Episode 1119 Average Score: 246.13 Environment solved in 1019 episodes! Average Score: 246.13 Episode 1120 Average Score: 246.42 Environment solved in 1020 episodes! Average Score: 246.42 Episode 1121 Average Score: 246.24 Environment solved in 1021 episodes! Average Score: 246.24 Episode 1122 Average Score: 246.12 Environment solved in 1022 episodes! Average Score: 246.12 Episode 1123 Average Score: 246.02 Environment solved in 1023 episodes! Average Score: 246.02 Episode 1124 Average Score: 246.45 Environment solved in 1024 episodes! Average Score: 246.45 Episode 1125 Average Score: 246.25 Environment solved in 1025 episodes! Average Score: 246.25 Episode 1126 Average Score: 246.36 Environment solved in 1026 episodes! Average Score: 246.36 Episode 1127 Average Score: 246.34 Environment solved in 1027 episodes! Average Score: 246.34 Episode 1128 Average Score: 246.06 Environment solved in 1028 episodes! Average Score: 246.06 Episode 1129 Average Score: 242.24 Environment solved in 1029 episodes! Average Score: 242.24 Episode 1130 Average Score: 238.43 Environment solved in 1030 episodes! Average Score: 238.43 Episode 1131 Average Score: 238.35 Environment solved in 1031 episodes! Average Score: 238.35 Episode 1132 Average Score: 234.70 Environment solved in 1032 episodes! Average Score: 234.70 Episode 1133 Average Score: 234.83 Environment solved in 1033 episodes! Average Score: 234.83 Episode 1134 Average Score: 234.46 Environment solved in 1034 episodes! Average Score: 234.46 Episode 1135 Average Score: 234.69 Environment solved in 1035 episodes! Average Score: 234.69 Episode 1136 Average Score: 234.99 Environment solved in 1036 episodes! Average Score: 234.99 Episode 1137 Average Score: 235.38 Environment solved in 1037 episodes! Average Score: 235.38 Episode 1138 Average Score: 235.36 Environment solved in 1038 episodes! Average Score: 235.36 Episode 1139 Average Score: 235.76 Environment solved in 1039 episodes! 
Average Score: 235.76 Episode 1140 Average Score: 233.81 Environment solved in 1040 episodes! Average Score: 233.81 Episode 1141 Average Score: 233.64 Environment solved in 1041 episodes! Average Score: 233.64 Episode 1142 Average Score: 233.44 Environment solved in 1042 episodes! Average Score: 233.44 Episode 1143 Average Score: 233.40 Environment solved in 1043 episodes! Average Score: 233.40 Episode 1144 Average Score: 233.02 Environment solved in 1044 episodes! Average Score: 233.02 Episode 1145 Average Score: 233.29 Environment solved in 1045 episodes! Average Score: 233.29 Episode 1146 Average Score: 233.12 Environment solved in 1046 episodes! Average Score: 233.12 Episode 1147 Average Score: 232.93 Environment solved in 1047 episodes! Average Score: 232.93 Episode 1148 Average Score: 232.49 Environment solved in 1048 episodes! Average Score: 232.49 Episode 1149 Average Score: 232.60 Environment solved in 1049 episodes! Average Score: 232.60 Episode 1150 Average Score: 232.21 Environment solved in 1050 episodes! Average Score: 232.21 Episode 1151 Average Score: 232.42 Environment solved in 1051 episodes! Average Score: 232.42 Episode 1152 Average Score: 232.11 Environment solved in 1052 episodes! Average Score: 232.11 Episode 1153 Average Score: 231.66 Environment solved in 1053 episodes! Average Score: 231.66 Episode 1154 Average Score: 231.73 Environment solved in 1054 episodes! Average Score: 231.73 Episode 1155 Average Score: 230.61 Environment solved in 1055 episodes! Average Score: 230.61 Episode 1156 Average Score: 233.23 Environment solved in 1056 episodes! Average Score: 233.23 Episode 1157 Average Score: 232.47 Environment solved in 1057 episodes! Average Score: 232.47 Episode 1158 Average Score: 232.92 Environment solved in 1058 episodes! Average Score: 232.92 Episode 1159 Average Score: 230.34 Environment solved in 1059 episodes! Average Score: 230.34 Episode 1160 Average Score: 230.79 Environment solved in 1060 episodes! 
Average Score: 230.79 Episode 1161 Average Score: 231.35 Environment solved in 1061 episodes! Average Score: 231.35 Episode 1162 Average Score: 231.30 Environment solved in 1062 episodes! Average Score: 231.30 Episode 1163 Average Score: 228.36 Environment solved in 1063 episodes! Average Score: 228.36 Episode 1164 Average Score: 228.74 Environment solved in 1064 episodes! Average Score: 228.74 Episode 1165 Average Score: 229.32 Environment solved in 1065 episodes! Average Score: 229.32 Episode 1166 Average Score: 229.35 Environment solved in 1066 episodes! Average Score: 229.35 Episode 1167 Average Score: 229.80 Environment solved in 1067 episodes! Average Score: 229.80 Episode 1168 Average Score: 229.77 Environment solved in 1068 episodes! Average Score: 229.77 Episode 1169 Average Score: 229.13 Environment solved in 1069 episodes! Average Score: 229.13 Episode 1170 Average Score: 229.31 Environment solved in 1070 episodes! Average Score: 229.31 Episode 1171 Average Score: 229.57 Environment solved in 1071 episodes! Average Score: 229.57 Episode 1172 Average Score: 229.89 Environment solved in 1072 episodes! Average Score: 229.89 Episode 1173 Average Score: 230.12 Environment solved in 1073 episodes! Average Score: 230.12 Episode 1174 Average Score: 230.30 Environment solved in 1074 episodes! Average Score: 230.30 Episode 1175 Average Score: 230.59 Environment solved in 1075 episodes! Average Score: 230.59 Episode 1176 Average Score: 230.62 Environment solved in 1076 episodes! Average Score: 230.62 Episode 1177 Average Score: 230.88 Environment solved in 1077 episodes! Average Score: 230.88 Episode 1178 Average Score: 233.57 Environment solved in 1078 episodes! Average Score: 233.57 Episode 1179 Average Score: 233.96 Environment solved in 1079 episodes! Average Score: 233.96 Episode 1180 Average Score: 233.70 Environment solved in 1080 episodes! Average Score: 233.70 Episode 1181 Average Score: 233.45 Environment solved in 1081 episodes! 
Average Score: 233.45 Episode 1182 Average Score: 233.29 Environment solved in 1082 episodes! Average Score: 233.29 Episode 1183 Average Score: 229.89 Environment solved in 1083 episodes! Average Score: 229.89 Episode 1184 Average Score: 229.92 Environment solved in 1084 episodes! Average Score: 229.92 Episode 1185 Average Score: 229.98 Environment solved in 1085 episodes! Average Score: 229.98 Episode 1186 Average Score: 230.10 Environment solved in 1086 episodes! Average Score: 230.10 Episode 1187 Average Score: 230.22 Environment solved in 1087 episodes! Average Score: 230.22 Episode 1188 Average Score: 227.64 Environment solved in 1088 episodes! Average Score: 227.64 Episode 1189 Average Score: 227.44 Environment solved in 1089 episodes! Average Score: 227.44 Episode 1190 Average Score: 227.65 Environment solved in 1090 episodes! Average Score: 227.65 Episode 1191 Average Score: 227.47 Environment solved in 1091 episodes! Average Score: 227.47 Episode 1192 Average Score: 227.18 Environment solved in 1092 episodes! Average Score: 227.18 Episode 1193 Average Score: 229.87 Environment solved in 1093 episodes! Average Score: 229.87 Episode 1194 Average Score: 229.91 Environment solved in 1094 episodes! Average Score: 229.91 Episode 1195 Average Score: 230.42 Environment solved in 1095 episodes! Average Score: 230.42 Episode 1196 Average Score: 230.99 Environment solved in 1096 episodes! Average Score: 230.99 Episode 1197 Average Score: 229.14 Environment solved in 1097 episodes! Average Score: 229.14 Episode 1198 Average Score: 229.12 Environment solved in 1098 episodes! Average Score: 229.12 Episode 1199 Average Score: 229.37 Environment solved in 1099 episodes! Average Score: 229.37 Episode 1200 Average Score: 228.91 Environment solved in 1100 episodes! Average Score: 228.91 Episode 1201 Average Score: 228.52 Environment solved in 1101 episodes! Average Score: 228.52 Episode 1202 Average Score: 228.30 Environment solved in 1102 episodes! 
Episode 1203	Average Score: 228.67
Environment solved in 1103 episodes!	Average Score: 228.67
Episode 1204	Average Score: 228.51
Environment solved in 1104 episodes!	Average Score: 228.51
Episode 1205	Average Score: 228.22
Environment solved in 1105 episodes!	Average Score: 228.22
[... per-episode log truncated: episodes 1206-1641, with the 100-episode average score fluctuating between roughly 216 and 255 and an "Environment solved" message printed each episode ...]
Episode 1642	Average Score: 222.21
Environment solved in 1542 episodes!	Average Score: 222.21
Episode 1643	Average Score: 223.27
Environment solved in 1543 episodes!	Average Score: 223.27
Average Score: 223.27 Episode 1644 Average Score: 222.84 Environment solved in 1544 episodes! Average Score: 222.84 Episode 1645 Average Score: 223.11 Environment solved in 1545 episodes! Average Score: 223.11 Episode 1646 Average Score: 223.70 Environment solved in 1546 episodes! Average Score: 223.70 Episode 1647 Average Score: 224.30 Environment solved in 1547 episodes! Average Score: 224.30 Episode 1648 Average Score: 224.13 Environment solved in 1548 episodes! Average Score: 224.13 Episode 1649 Average Score: 223.78 Environment solved in 1549 episodes! Average Score: 223.78 Episode 1650 Average Score: 224.19 Environment solved in 1550 episodes! Average Score: 224.19 Episode 1651 Average Score: 221.64 Environment solved in 1551 episodes! Average Score: 221.64 Episode 1652 Average Score: 221.99 Environment solved in 1552 episodes! Average Score: 221.99 Episode 1653 Average Score: 222.29 Environment solved in 1553 episodes! Average Score: 222.29 Episode 1654 Average Score: 220.21 Environment solved in 1554 episodes! Average Score: 220.21 Episode 1655 Average Score: 219.88 Environment solved in 1555 episodes! Average Score: 219.88 Episode 1656 Average Score: 219.22 Environment solved in 1556 episodes! Average Score: 219.22 Episode 1657 Average Score: 219.86 Environment solved in 1557 episodes! Average Score: 219.86 Episode 1658 Average Score: 219.86 Environment solved in 1558 episodes! Average Score: 219.86 Episode 1659 Average Score: 219.84 Environment solved in 1559 episodes! Average Score: 219.84 Episode 1660 Average Score: 219.26 Environment solved in 1560 episodes! Average Score: 219.26 Episode 1661 Average Score: 219.84 Environment solved in 1561 episodes! Average Score: 219.84 Episode 1662 Average Score: 219.37 Environment solved in 1562 episodes! Average Score: 219.37 Episode 1663 Average Score: 218.84 Environment solved in 1563 episodes! Average Score: 218.84 Episode 1664 Average Score: 218.10 Environment solved in 1564 episodes! 
Average Score: 218.10 Episode 1665 Average Score: 220.42 Environment solved in 1565 episodes! Average Score: 220.42 Episode 1666 Average Score: 220.12 Environment solved in 1566 episodes! Average Score: 220.12 Episode 1667 Average Score: 220.33 Environment solved in 1567 episodes! Average Score: 220.33 Episode 1668 Average Score: 221.75 Environment solved in 1568 episodes! Average Score: 221.75 Episode 1669 Average Score: 223.65 Environment solved in 1569 episodes! Average Score: 223.65 Episode 1670 Average Score: 222.97 Environment solved in 1570 episodes! Average Score: 222.97 Episode 1671 Average Score: 223.05 Environment solved in 1571 episodes! Average Score: 223.05 Episode 1672 Average Score: 222.76 Environment solved in 1572 episodes! Average Score: 222.76 Episode 1673 Average Score: 221.90 Environment solved in 1573 episodes! Average Score: 221.90 Episode 1674 Average Score: 222.31 Environment solved in 1574 episodes! Average Score: 222.31 Episode 1675 Average Score: 222.31 Environment solved in 1575 episodes! Average Score: 222.31 Episode 1676 Average Score: 222.17 Environment solved in 1576 episodes! Average Score: 222.17 Episode 1677 Average Score: 225.76 Environment solved in 1577 episodes! Average Score: 225.76 Episode 1678 Average Score: 225.53 Environment solved in 1578 episodes! Average Score: 225.53 Episode 1679 Average Score: 225.21 Environment solved in 1579 episodes! Average Score: 225.21 Episode 1680 Average Score: 223.82 Environment solved in 1580 episodes! Average Score: 223.82 Episode 1681 Average Score: 224.00 Environment solved in 1581 episodes! Average Score: 224.00 Episode 1682 Average Score: 224.50 Environment solved in 1582 episodes! Average Score: 224.50 Episode 1683 Average Score: 224.27 Environment solved in 1583 episodes! Average Score: 224.27 Episode 1684 Average Score: 224.50 Environment solved in 1584 episodes! Average Score: 224.50 Episode 1685 Average Score: 223.87 Environment solved in 1585 episodes! 
Average Score: 223.87 Episode 1686 Average Score: 224.21 Environment solved in 1586 episodes! Average Score: 224.21 Episode 1687 Average Score: 223.90 Environment solved in 1587 episodes! Average Score: 223.90 Episode 1688 Average Score: 223.19 Environment solved in 1588 episodes! Average Score: 223.19 Episode 1689 Average Score: 220.49 Environment solved in 1589 episodes! Average Score: 220.49 Episode 1690 Average Score: 220.18 Environment solved in 1590 episodes! Average Score: 220.18 Episode 1691 Average Score: 220.19 Environment solved in 1591 episodes! Average Score: 220.19 Episode 1692 Average Score: 219.29 Environment solved in 1592 episodes! Average Score: 219.29 Episode 1693 Average Score: 218.87 Environment solved in 1593 episodes! Average Score: 218.87 Episode 1694 Average Score: 218.40 Environment solved in 1594 episodes! Average Score: 218.40 Episode 1695 Average Score: 218.04 Environment solved in 1595 episodes! Average Score: 218.04 Episode 1696 Average Score: 218.60 Environment solved in 1596 episodes! Average Score: 218.60 Episode 1697 Average Score: 220.91 Environment solved in 1597 episodes! Average Score: 220.91 Episode 1698 Average Score: 220.56 Environment solved in 1598 episodes! Average Score: 220.56 Episode 1699 Average Score: 220.59 Environment solved in 1599 episodes! Average Score: 220.59 Episode 1700 Average Score: 220.64 Environment solved in 1600 episodes! Average Score: 220.64 Episode 1701 Average Score: 222.84 Environment solved in 1601 episodes! Average Score: 222.84 Episode 1702 Average Score: 221.08 Environment solved in 1602 episodes! Average Score: 221.08 Episode 1703 Average Score: 220.85 Environment solved in 1603 episodes! Average Score: 220.85 Episode 1704 Average Score: 221.61 Environment solved in 1604 episodes! Average Score: 221.61 Episode 1705 Average Score: 220.76 Environment solved in 1605 episodes! Average Score: 220.76 Episode 1706 Average Score: 220.80 Environment solved in 1606 episodes! 
Average Score: 220.80 Episode 1707 Average Score: 219.98 Environment solved in 1607 episodes! Average Score: 219.98 Episode 1708 Average Score: 217.50 Environment solved in 1608 episodes! Average Score: 217.50 Episode 1709 Average Score: 216.74 Environment solved in 1609 episodes! Average Score: 216.74 Episode 1710 Average Score: 219.28 Environment solved in 1610 episodes! Average Score: 219.28 Episode 1711 Average Score: 219.38 Environment solved in 1611 episodes! Average Score: 219.38 Episode 1712 Average Score: 219.36 Environment solved in 1612 episodes! Average Score: 219.36 Episode 1713 Average Score: 219.30 Environment solved in 1613 episodes! Average Score: 219.30 Episode 1714 Average Score: 221.56 Environment solved in 1614 episodes! Average Score: 221.56 Episode 1715 Average Score: 218.66 Environment solved in 1615 episodes! Average Score: 218.66 Episode 1716 Average Score: 217.44 Environment solved in 1616 episodes! Average Score: 217.44 Episode 1717 Average Score: 217.77 Environment solved in 1617 episodes! Average Score: 217.77 Episode 1718 Average Score: 217.50 Environment solved in 1618 episodes! Average Score: 217.50 Episode 1719 Average Score: 217.15 Environment solved in 1619 episodes! Average Score: 217.15 Episode 1720 Average Score: 217.28 Environment solved in 1620 episodes! Average Score: 217.28 Episode 1721 Average Score: 217.30 Environment solved in 1621 episodes! Average Score: 217.30 Episode 1722 Average Score: 217.77 Environment solved in 1622 episodes! Average Score: 217.77 Episode 1723 Average Score: 216.75 Environment solved in 1623 episodes! Average Score: 216.75 Episode 1724 Average Score: 217.16 Environment solved in 1624 episodes! Average Score: 217.16 Episode 1725 Average Score: 216.71 Environment solved in 1625 episodes! Average Score: 216.71 Episode 1726 Average Score: 216.82 Environment solved in 1626 episodes! Average Score: 216.82 Episode 1727 Average Score: 219.90 Environment solved in 1627 episodes! 
Average Score: 219.90 Episode 1728 Average Score: 219.94 Environment solved in 1628 episodes! Average Score: 219.94 Episode 1729 Average Score: 219.81 Environment solved in 1629 episodes! Average Score: 219.81 Episode 1730 Average Score: 220.32 Environment solved in 1630 episodes! Average Score: 220.32 Episode 1731 Average Score: 219.88 Environment solved in 1631 episodes! Average Score: 219.88 Episode 1732 Average Score: 219.29 Environment solved in 1632 episodes! Average Score: 219.29 Episode 1733 Average Score: 219.11 Environment solved in 1633 episodes! Average Score: 219.11 Episode 1734 Average Score: 219.58 Environment solved in 1634 episodes! Average Score: 219.58 Episode 1735 Average Score: 220.27 Environment solved in 1635 episodes! Average Score: 220.27 Episode 1736 Average Score: 218.75 Environment solved in 1636 episodes! Average Score: 218.75 Episode 1737 Average Score: 218.83 Environment solved in 1637 episodes! Average Score: 218.83 Episode 1738 Average Score: 218.85 Environment solved in 1638 episodes! Average Score: 218.85 Episode 1739 Average Score: 219.22 Environment solved in 1639 episodes! Average Score: 219.22 Episode 1740 Average Score: 220.98 Environment solved in 1640 episodes! Average Score: 220.98 Episode 1741 Average Score: 221.17 Environment solved in 1641 episodes! Average Score: 221.17 Episode 1742 Average Score: 221.45 Environment solved in 1642 episodes! Average Score: 221.45 Episode 1743 Average Score: 221.46 Environment solved in 1643 episodes! Average Score: 221.46 Episode 1744 Average Score: 221.48 Environment solved in 1644 episodes! Average Score: 221.48 Episode 1745 Average Score: 220.96 Environment solved in 1645 episodes! Average Score: 220.96 Episode 1746 Average Score: 220.36 Environment solved in 1646 episodes! Average Score: 220.36 Episode 1747 Average Score: 220.19 Environment solved in 1647 episodes! Average Score: 220.19 Episode 1748 Average Score: 220.06 Environment solved in 1648 episodes! 
Average Score: 220.06 Episode 1749 Average Score: 220.68 Environment solved in 1649 episodes! Average Score: 220.68 Episode 1750 Average Score: 220.71 Environment solved in 1650 episodes! Average Score: 220.71 Episode 1751 Average Score: 222.67 Environment solved in 1651 episodes! Average Score: 222.67 Episode 1752 Average Score: 222.18 Environment solved in 1652 episodes! Average Score: 222.18 Episode 1753 Average Score: 221.39 Environment solved in 1653 episodes! Average Score: 221.39 Episode 1754 Average Score: 223.43 Environment solved in 1654 episodes! Average Score: 223.43 Episode 1755 Average Score: 223.69 Environment solved in 1655 episodes! Average Score: 223.69 Episode 1756 Average Score: 224.64 Environment solved in 1656 episodes! Average Score: 224.64 Episode 1757 Average Score: 224.77 Environment solved in 1657 episodes! Average Score: 224.77 Episode 1758 Average Score: 224.46 Environment solved in 1658 episodes! Average Score: 224.46 Episode 1759 Average Score: 224.20 Environment solved in 1659 episodes! Average Score: 224.20 Episode 1760 Average Score: 224.19 Environment solved in 1660 episodes! Average Score: 224.19 Episode 1761 Average Score: 223.50 Environment solved in 1661 episodes! Average Score: 223.50 Episode 1762 Average Score: 223.32 Environment solved in 1662 episodes! Average Score: 223.32 Episode 1763 Average Score: 222.86 Environment solved in 1663 episodes! Average Score: 222.86 Episode 1764 Average Score: 224.05 Environment solved in 1664 episodes! Average Score: 224.05 Episode 1765 Average Score: 221.26 Environment solved in 1665 episodes! Average Score: 221.26 Episode 1766 Average Score: 221.15 Environment solved in 1666 episodes! Average Score: 221.15 Episode 1767 Average Score: 220.96 Environment solved in 1667 episodes! Average Score: 220.96 Episode 1768 Average Score: 220.79 Environment solved in 1668 episodes! Average Score: 220.79 Episode 1769 Average Score: 218.50 Environment solved in 1669 episodes! 
Average Score: 218.50 Episode 1770 Average Score: 218.31 Environment solved in 1670 episodes! Average Score: 218.31 Episode 1771 Average Score: 215.63 Environment solved in 1671 episodes! Average Score: 215.63 Episode 1772 Average Score: 215.95 Environment solved in 1672 episodes! Average Score: 215.95 Episode 1773 Average Score: 216.85 Environment solved in 1673 episodes! Average Score: 216.85 Episode 1774 Average Score: 217.36 Environment solved in 1674 episodes! Average Score: 217.36 Episode 1775 Average Score: 214.63 Environment solved in 1675 episodes! Average Score: 214.63 Episode 1776 Average Score: 214.74 Environment solved in 1676 episodes! Average Score: 214.74 Episode 1777 Average Score: 215.34 Environment solved in 1677 episodes! Average Score: 215.34 Episode 1778 Average Score: 215.20 Environment solved in 1678 episodes! Average Score: 215.20 Episode 1779 Average Score: 215.43 Environment solved in 1679 episodes! Average Score: 215.43 Episode 1780 Average Score: 214.52 Environment solved in 1680 episodes! Average Score: 214.52 Episode 1781 Average Score: 214.59 Environment solved in 1681 episodes! Average Score: 214.59 Episode 1782 Average Score: 215.11 Environment solved in 1682 episodes! Average Score: 215.11 Episode 1783 Average Score: 214.42 Environment solved in 1683 episodes! Average Score: 214.42 Episode 1784 Average Score: 214.34 Environment solved in 1684 episodes! Average Score: 214.34 Episode 1785 Average Score: 215.14 Environment solved in 1685 episodes! Average Score: 215.14 Episode 1786 Average Score: 215.37 Environment solved in 1686 episodes! Average Score: 215.37 Episode 1787 Average Score: 214.85 Environment solved in 1687 episodes! Average Score: 214.85 Episode 1788 Average Score: 215.38 Environment solved in 1688 episodes! Average Score: 215.38 Episode 1789 Average Score: 217.62 Environment solved in 1689 episodes! Average Score: 217.62 Episode 1790 Average Score: 218.24 Environment solved in 1690 episodes! 
Average Score: 218.24 Episode 1791 Average Score: 217.57 Environment solved in 1691 episodes! Average Score: 217.57 Episode 1792 Average Score: 218.66 Environment solved in 1692 episodes! Average Score: 218.66 Episode 1793 Average Score: 218.87 Environment solved in 1693 episodes! Average Score: 218.87 Episode 1794 Average Score: 219.71 Environment solved in 1694 episodes! Average Score: 219.71 Episode 1795 Average Score: 220.08 Environment solved in 1695 episodes! Average Score: 220.08 Episode 1796 Average Score: 220.00 Environment solved in 1696 episodes! Average Score: 220.00 Episode 1797 Average Score: 220.27 Environment solved in 1697 episodes! Average Score: 220.27 Episode 1798 Average Score: 220.24 Environment solved in 1698 episodes! Average Score: 220.24 Episode 1799 Average Score: 220.04 Environment solved in 1699 episodes! Average Score: 220.04 Episode 1800 Average Score: 219.60 Environment solved in 1700 episodes! Average Score: 219.60 Episode 1801 Average Score: 219.57 Environment solved in 1701 episodes! Average Score: 219.57 Episode 1802 Average Score: 221.19 Environment solved in 1702 episodes! Average Score: 221.19 Episode 1803 Average Score: 221.04 Environment solved in 1703 episodes! Average Score: 221.04 Episode 1804 Average Score: 221.02 Environment solved in 1704 episodes! Average Score: 221.02 Episode 1805 Average Score: 221.88 Environment solved in 1705 episodes! Average Score: 221.88 Episode 1806 Average Score: 221.47 Environment solved in 1706 episodes! Average Score: 221.47 Episode 1807 Average Score: 222.11 Environment solved in 1707 episodes! Average Score: 222.11 Episode 1808 Average Score: 221.90 Environment solved in 1708 episodes! Average Score: 221.90 Episode 1809 Average Score: 221.56 Environment solved in 1709 episodes! Average Score: 221.56 Episode 1810 Average Score: 221.67 Environment solved in 1710 episodes! Average Score: 221.67 Episode 1811 Average Score: 222.21 Environment solved in 1711 episodes! 
Average Score: 222.21 Episode 1812 Average Score: 221.79 Environment solved in 1712 episodes! Average Score: 221.79 Episode 1813 Average Score: 219.57 Environment solved in 1713 episodes! Average Score: 219.57 Episode 1814 Average Score: 219.38 Environment solved in 1714 episodes! Average Score: 219.38 Episode 1815 Average Score: 222.30 Environment solved in 1715 episodes! Average Score: 222.30 Episode 1816 Average Score: 223.71 Environment solved in 1716 episodes! Average Score: 223.71 Episode 1817 Average Score: 224.09 Environment solved in 1717 episodes! Average Score: 224.09 Episode 1818 Average Score: 224.10 Environment solved in 1718 episodes! Average Score: 224.10 Episode 1819 Average Score: 224.59 Environment solved in 1719 episodes! Average Score: 224.59 Episode 1820 Average Score: 224.40 Environment solved in 1720 episodes! Average Score: 224.40 Episode 1821 Average Score: 223.98 Environment solved in 1721 episodes! Average Score: 223.98 Episode 1822 Average Score: 221.92 Environment solved in 1722 episodes! Average Score: 221.92 Episode 1823 Average Score: 222.16 Environment solved in 1723 episodes! Average Score: 222.16 Episode 1824 Average Score: 222.23 Environment solved in 1724 episodes! Average Score: 222.23 Episode 1825 Average Score: 222.31 Environment solved in 1725 episodes! Average Score: 222.31 Episode 1826 Average Score: 222.39 Environment solved in 1726 episodes! Average Score: 222.39 Episode 1827 Average Score: 222.09 Environment solved in 1727 episodes! Average Score: 222.09 Episode 1828 Average Score: 222.12 Environment solved in 1728 episodes! Average Score: 222.12 Episode 1829 Average Score: 221.89 Environment solved in 1729 episodes! Average Score: 221.89 Episode 1830 Average Score: 222.07 Environment solved in 1730 episodes! Average Score: 222.07 Episode 1831 Average Score: 222.42 Environment solved in 1731 episodes! Average Score: 222.42 Episode 1832 Average Score: 223.02 Environment solved in 1732 episodes! 
Average Score: 223.02 Episode 1833 Average Score: 222.69 Environment solved in 1733 episodes! Average Score: 222.69 Episode 1834 Average Score: 222.72 Environment solved in 1734 episodes! Average Score: 222.72 Episode 1835 Average Score: 222.91 Environment solved in 1735 episodes! Average Score: 222.91 Episode 1836 Average Score: 224.84 Environment solved in 1736 episodes! Average Score: 224.84 Episode 1837 Average Score: 224.50 Environment solved in 1737 episodes! Average Score: 224.50 Episode 1838 Average Score: 224.43 Environment solved in 1738 episodes! Average Score: 224.43 Episode 1839 Average Score: 224.56 Environment solved in 1739 episodes! Average Score: 224.56 Episode 1840 Average Score: 222.50 Environment solved in 1740 episodes! Average Score: 222.50 Episode 1841 Average Score: 222.50 Environment solved in 1741 episodes! Average Score: 222.50 Episode 1842 Average Score: 221.95 Environment solved in 1742 episodes! Average Score: 221.95 Episode 1843 Average Score: 219.47 Environment solved in 1743 episodes! Average Score: 219.47 Episode 1844 Average Score: 220.08 Environment solved in 1744 episodes! Average Score: 220.08 Episode 1845 Average Score: 220.67 Environment solved in 1745 episodes! Average Score: 220.67 Episode 1846 Average Score: 221.18 Environment solved in 1746 episodes! Average Score: 221.18 Episode 1847 Average Score: 221.10 Environment solved in 1747 episodes! Average Score: 221.10 Episode 1848 Average Score: 220.82 Environment solved in 1748 episodes! Average Score: 220.82 Episode 1849 Average Score: 220.38 Environment solved in 1749 episodes! Average Score: 220.38 Episode 1850 Average Score: 220.52 Environment solved in 1750 episodes! Average Score: 220.52 Episode 1851 Average Score: 221.24 Environment solved in 1751 episodes! Average Score: 221.24 Episode 1852 Average Score: 221.08 Environment solved in 1752 episodes! Average Score: 221.08 Episode 1853 Average Score: 221.40 Environment solved in 1753 episodes! 
Average Score: 221.40 Episode 1854 Average Score: 221.52 Environment solved in 1754 episodes! Average Score: 221.52 Episode 1855 Average Score: 221.58 Environment solved in 1755 episodes! Average Score: 221.58 Episode 1856 Average Score: 221.40 Environment solved in 1756 episodes! Average Score: 221.40 Episode 1857 Average Score: 221.42 Environment solved in 1757 episodes! Average Score: 221.42 Episode 1858 Average Score: 221.93 Environment solved in 1758 episodes! Average Score: 221.93 Episode 1859 Average Score: 221.79 Environment solved in 1759 episodes! Average Score: 221.79 Episode 1860 Average Score: 222.39 Environment solved in 1760 episodes! Average Score: 222.39 Episode 1861 Average Score: 222.53 Environment solved in 1761 episodes! Average Score: 222.53 Episode 1862 Average Score: 222.79 Environment solved in 1762 episodes! Average Score: 222.79 Episode 1863 Average Score: 223.37 Environment solved in 1763 episodes! Average Score: 223.37 Episode 1864 Average Score: 220.53 Environment solved in 1764 episodes! Average Score: 220.53 Episode 1865 Average Score: 223.06 Environment solved in 1765 episodes! Average Score: 223.06 Episode 1866 Average Score: 223.00 Environment solved in 1766 episodes! Average Score: 223.00 Episode 1867 Average Score: 222.86 Environment solved in 1767 episodes! Average Score: 222.86 Episode 1868 Average Score: 222.68 Environment solved in 1768 episodes! Average Score: 222.68 Episode 1869 Average Score: 224.98 Environment solved in 1769 episodes! Average Score: 224.98 Episode 1870 Average Score: 225.86 Environment solved in 1770 episodes! Average Score: 225.86 Episode 1871 Average Score: 228.65 Environment solved in 1771 episodes! Average Score: 228.65 Episode 1872 Average Score: 228.54 Environment solved in 1772 episodes! Average Score: 228.54 Episode 1873 Average Score: 228.38 Environment solved in 1773 episodes! Average Score: 228.38 Episode 1874 Average Score: 228.33 Environment solved in 1774 episodes! 
Average Score: 228.33 Episode 1875 Average Score: 231.12 Environment solved in 1775 episodes! Average Score: 231.12 Episode 1876 Average Score: 231.28 Environment solved in 1776 episodes! Average Score: 231.28 Episode 1877 Average Score: 230.85 Environment solved in 1777 episodes! Average Score: 230.85 Episode 1878 Average Score: 228.58 Environment solved in 1778 episodes! Average Score: 228.58 Episode 1879 Average Score: 228.24 Environment solved in 1779 episodes! Average Score: 228.24 Episode 1880 Average Score: 230.05 Environment solved in 1780 episodes! Average Score: 230.05 Episode 1881 Average Score: 229.60 Environment solved in 1781 episodes! Average Score: 229.60 Episode 1882 Average Score: 229.53 Environment solved in 1782 episodes! Average Score: 229.53 Episode 1883 Average Score: 229.94 Environment solved in 1783 episodes! Average Score: 229.94 Episode 1884 Average Score: 230.16 Environment solved in 1784 episodes! Average Score: 230.16 Episode 1885 Average Score: 229.72 Environment solved in 1785 episodes! Average Score: 229.72 Episode 1886 Average Score: 229.77 Environment solved in 1786 episodes! Average Score: 229.77 Episode 1887 Average Score: 225.66 Environment solved in 1787 episodes! Average Score: 225.66 Episode 1888 Average Score: 225.00 Environment solved in 1788 episodes! Average Score: 225.00 Episode 1889 Average Score: 224.22 Environment solved in 1789 episodes! Average Score: 224.22 Episode 1890 Average Score: 221.59 Environment solved in 1790 episodes! Average Score: 221.59 Episode 1891 Average Score: 221.75 Environment solved in 1791 episodes! Average Score: 221.75 Episode 1892 Average Score: 221.40 Environment solved in 1792 episodes! Average Score: 221.40 Episode 1893 Average Score: 221.97 Environment solved in 1793 episodes! Average Score: 221.97 Episode 1894 Average Score: 221.72 Environment solved in 1794 episodes! Average Score: 221.72 Episode 1895 Average Score: 221.29 Environment solved in 1795 episodes! 
Average Score: 221.29 Episode 1896 Average Score: 218.71 Environment solved in 1796 episodes! Average Score: 218.71 Episode 1897 Average Score: 218.14 Environment solved in 1797 episodes! Average Score: 218.14 Episode 1898 Average Score: 218.48 Environment solved in 1798 episodes! Average Score: 218.48 Episode 1899 Average Score: 217.36 Environment solved in 1799 episodes! Average Score: 217.36 Episode 1900 Average Score: 218.01 Environment solved in 1800 episodes! Average Score: 218.01 Episode 1901 Average Score: 218.24 Environment solved in 1801 episodes! Average Score: 218.24 Episode 1902 Average Score: 218.83 Environment solved in 1802 episodes! Average Score: 218.83 Episode 1903 Average Score: 219.02 Environment solved in 1803 episodes! Average Score: 219.02 Episode 1904 Average Score: 219.11 Environment solved in 1804 episodes! Average Score: 219.11 Episode 1905 Average Score: 218.65 Environment solved in 1805 episodes! Average Score: 218.65 Episode 1906 Average Score: 219.13 Environment solved in 1806 episodes! Average Score: 219.13 Episode 1907 Average Score: 219.02 Environment solved in 1807 episodes! Average Score: 219.02 Episode 1908 Average Score: 221.62 Environment solved in 1808 episodes! Average Score: 221.62 Episode 1909 Average Score: 222.32 Environment solved in 1809 episodes! Average Score: 222.32 Episode 1910 Average Score: 222.28 Environment solved in 1810 episodes! Average Score: 222.28 Episode 1911 Average Score: 222.62 Environment solved in 1811 episodes! Average Score: 222.62 Episode 1912 Average Score: 220.78 Environment solved in 1812 episodes! Average Score: 220.78 Episode 1913 Average Score: 223.38 Environment solved in 1813 episodes! Average Score: 223.38 Episode 1914 Average Score: 223.91 Environment solved in 1814 episodes! Average Score: 223.91 Episode 1915 Average Score: 223.99 Environment solved in 1815 episodes! Average Score: 223.99 Episode 1916 Average Score: 224.07 Environment solved in 1816 episodes! 
Average Score: 224.07 Episode 1917 Average Score: 224.16 Environment solved in 1817 episodes! [per-episode log for episodes 1918–1999 trimmed: each entry repeats "Episode N / Average Score / Environment solved in N-100 episodes!", with the 100-episode average drifting between roughly 219 and 229] Episode 2000 Average Score: 226.29 Environment solved in 1900 episodes! 
Average Score: 226.29 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
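The `dqn` docstring above describes the exploration schedule: epsilon starts at `eps_start`, is multiplied by `eps_decay` after every episode, and is floored at `eps_end`. A minimal standalone sketch of that schedule (the helper name `epsilon_schedule` is ours for illustration, not part of the notebook):

```python
# Reproduce the epsilon schedule used in dqn(): multiplicative decay with a floor.
def epsilon_schedule(n_episodes, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    eps = eps_start
    history = []
    for _ in range(n_episodes):
        history.append(eps)                 # epsilon used for this episode
        eps = max(eps_end, eps_decay * eps) # decay after the episode, clipped at the floor
    return history

eps = epsilon_schedule(1000)
# With these defaults, 0.995**n drops below 0.01 once n > log(0.01)/log(0.995) ≈ 919,
# so epsilon sits at its floor for roughly the last 80 of 1000 episodes.
```

This explains why longer runs (e.g. `n_episodes=10000` below) spend most of their episodes acting almost greedily.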
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -183.04 Episode 200 Average Score: -99.42 Episode 300 Average Score: -46.78 Episode 400 Average Score: -28.97 Episode 500 Average Score: 55.12 Episode 600 Average Score: 135.40 Episode 668 Average Score: 201.38 Environment solved in 568 episodes! Average Score: 201.38 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. 
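The instructions above deliberately leave the `model.py` architecture up to you. One possible sketch — the two fully connected hidden layers of 64 units each are an illustrative choice, not the notebook's actual network:

```python
# Hypothetical model.py: a small MLP mapping an 8-dim LunarLander state
# to 4 action values. Hidden sizes (64, 64) are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    def __init__(self, state_size, action_size, seed, fc1_units=64, fc2_units=64):
        super().__init__()
        self.seed = torch.manual_seed(seed)
        self.fc1 = nn.Linear(state_size, fc1_units)
        self.fc2 = nn.Linear(fc1_units, fc2_units)
        self.fc3 = nn.Linear(fc2_units, action_size)

    def forward(self, state):
        # state: (batch, state_size) -> action values: (batch, action_size)
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```

Any architecture with the right input and output dimensions will work; depth and width mainly affect how quickly the agent converges.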
Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=10000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -109.12 Episode 200 Average Score: -26.80 Episode 300 Average Score: 107.84 Episode 400 Average Score: 191.02 Episode 414 Average Score: 200.96 Environment solved in 314 episodes! Average Score: 200.96 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
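For the `learn` method, the exercise asks you to use the local and target Q-networks to compute a loss from the sampled batch. A hedged sketch of one common way to do it — the function name `td_loss`, the `GAMMA` value, and the batch tensor shapes are assumptions for illustration, not taken from `dqn_agent.py`:

```python
# Sketch of the DQN learn step: one-step TD targets from the frozen target
# network, MSE regression of the local network's Q(s, a) onto those targets.
import torch
import torch.nn.functional as F

GAMMA = 0.99  # assumed discount factor

def td_loss(qnetwork_local, qnetwork_target, experiences, gamma=GAMMA):
    states, actions, rewards, next_states, dones = experiences
    # Max predicted Q-value for each next state, from the target network
    q_targets_next = qnetwork_target(next_states).detach().max(1)[0].unsqueeze(1)
    # One-step TD target; (1 - dones) zeroes the bootstrap term on terminal steps
    q_targets = rewards + gamma * q_targets_next * (1 - dones)
    # Q-values the local network assigns to the actions actually taken
    q_expected = qnetwork_local(states).gather(1, actions)
    return F.mse_loss(q_expected, q_targets)
```

Backpropagating this loss through `qnetwork_local` (and periodically syncing `qnetwork_target` toward it) is the core of the update; a later cell in this notebook compares exactly this vectorized formulation against a loop-based one.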
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent_solution import Agent #import agent.py action_size = env.action_space.n #pick environment action size state_size = env.observation_space.shape[0] #pick environment state size agent = Agent(state_size, action_size, seed=0) # initialise agent # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage 
Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. 
This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() gamma = 0.99 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") experiences = agent.memory.sample() states, actions, rewards, next_states, dones = experiences if True: # Construct one-step TD targets using target network with torch.no_grad(): next_action_values = agent.qnetwork_target(next_states).cpu().detach().numpy() next_q_values = np.max(next_action_values, axis=1) targets = [] for reward, next_q_value, done in zip(rewards, next_q_values, dones): 
targets.append(reward + gamma * (1-done) * next_q_value) # Compute MSE action_values = agent.qnetwork_local(states) t_actions = torch.tensor(actions, dtype=torch.long, device=device) q_sa_values = action_values.gather(1, t_actions.view(-1,1)) t_targets = torch.tensor(targets, device=device) loss = torch.mean(torch.square(q_sa_values - t_targets)) else: # Construct one-step TD targets using target network with torch.no_grad(): next_q_values = agent.qnetwork_target(next_states).detach().max(1)[0].unsqueeze(1) targets = rewards + gamma * (1 - dones) * next_q_values # Compute MSE action_values = agent.qnetwork_local(states) q_sa_preds = action_values.gather(1, actions) loss = torch.mean(torch.square(q_sa_preds - targets)) print(loss.item()) print(q_sa_values.cpu().detach().numpy().shape) print(q_sa_preds.cpu().detach().numpy().flatten()) ###Output [-33.743614 90.286316 -1.0548167 -1.5158374 34.196594 30.135489 21.827911 -8.624945 24.918343 11.704802 42.70224 1.2413999 15.209573 27.530087 0.24798045 26.878616 -6.3010716 10.561485 16.667194 -4.4810815 19.732782 63.996754 10.385481 0.26037377 -4.5030932 44.460773 4.044197 -6.473393 25.659065 53.310284 65.34961 7.9620175 23.317806 4.8133388 23.110073 -0.5656529 22.68634 -5.3441925 11.815998 47.00922 30.048939 19.919794 14.002056 7.371929 24.574413 13.956727 -0.94423777 40.63397 37.622684 21.648127 -1.1766562 39.068264 27.620682 30.79396 36.87444 7.4717574 39.382317 44.02532 1.5031552 91.680756 -0.86245614 13.63158 26.833729 23.956995 ] [-33.743614 90.286316 -1.0548167 -1.5158374 34.196594 30.135489 21.827911 -8.624945 24.918343 11.704802 42.70224 1.2413999 15.209573 27.530087 0.24798045 26.878616 -6.3010716 10.561485 16.667194 -4.4810815 19.732782 63.996754 10.385481 0.26037377 -4.5030932 44.460773 4.044197 -6.473393 25.659065 53.310284 65.34961 7.9620175 23.317806 4.8133388 23.110073 -0.5656529 22.68634 -5.3441925 11.815998 47.00922 30.048939 19.919794 14.002056 7.371929 24.574413 13.956727 -0.94423777 40.63397 37.622684 
21.648127 -1.1766562 39.068264 27.620682 30.79396 36.87444 7.4717574 39.382317 44.02532 1.5031552 91.680756 -0.86245614 13.63158 26.833729 23.956995 ] ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=3000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -219.78 Episode 200 Average Score: -150.33 Episode 300 Average Score: -100.84 Episode 400 Average Score: -70.137 Episode 500 Average Score: -62.03 Episode 600 Average Score: -19.94 Episode 700 Average Score: 101.98 Episode 800 Average Score: 149.65 Episode 900 Average Score: 117.07 Episode 1000 Average Score: 127.44 Episode 1100 Average Score: 154.85 Episode 1200 Average Score: 98.303 Episode 1300 Average Score: 118.09 Episode 1400 Average 
Score: 120.12 Episode 1500 Average Score: 153.54 Episode 1600 Average Score: 78.605 Episode 1700 Average Score: 55.34 Episode 1800 Average Score: 77.72 Episode 1900 Average Score: 140.87 Episode 2000 Average Score: 134.60 Episode 2100 Average Score: 152.05 Episode 2179 Average Score: 201.61 Environment solved in 2079 episodes! Average Score: 201.61 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code %load_ext autoreload %autoreload 2 import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: !apt-get install -y xvfb x11-utils python-box2d python-opengl swig > /dev/null 2>&1 !pip install gym pyvirtualdisplay box2d box2d-py > /dev/null 2>&1 from IPython import display as ipythondisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(400, 300)) display.start() ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
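As a quick reference for the shapes this cell prints: LunarLander-v2's state is an 8-dimensional vector (position, velocity, angle, angular velocity, and two leg-contact flags) and its four discrete actions are believed to map to engine firings as below. This mapping is background knowledge about the environment, not something the notebook itself verifies:

```python
# Assumed action meanings for LunarLander-v2's 4 discrete actions.
ACTIONS = {
    0: "do nothing",
    1: "fire left orientation engine",
    2: "fire main engine",
    3: "fire right orientation engine",
}
print(len(ACTIONS))  # matches the env.action_space.n printed by the cell
```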
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
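One of the easiest parameters to amend in the training cell below is `eps_decay`: since `eps = max(eps_end, eps_decay*eps)` is applied once per episode, epsilon reaches its floor after roughly `log(eps_end/eps_start)/log(eps_decay)` episodes. A quick standalone check (plain Python, independent of any notebook state):

```python
import math

def episodes_until_floor(eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Episodes needed before max(eps_end, eps_decay * eps) clamps to eps_end."""
    return math.ceil(math.log(eps_end / eps_start) / math.log(eps_decay))

print(episodes_until_floor())                 # 919 with the defaults used below
print(episodes_until_floor(eps_decay=0.99))   # a faster schedule: 459
```

So with the defaults, exploration persists (with shrinking probability) for roughly the first 900 episodes, which lines up with the slow early progress visible in the training logs.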
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
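When watching the trained agent, `agent.act(state)` is called without an epsilon argument, so the agent presumably acts greedily. The selection rule then reduces to an argmax over the four action values, sketched here over a plain list rather than the network's output tensor (the function body is an assumption about how `dqn_agent.py` works, not a copy of it):

```python
import random

def act(q_values, eps=0.0):
    """Epsilon-greedy selection over a list of action values."""
    if random.random() > eps:
        # Exploit: index of the largest action value.
        return max(range(len(q_values)), key=q_values.__getitem__)
    # Explore: uniform random action.
    return random.randrange(len(q_values))

print(act([0.1, 0.7, -0.2, 0.4]))  # eps defaults to 0.0, so this picks index 1
```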
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -111.95 Episode 200 Average Score: -21.643 Episode 300 Average Score: 81.024 Episode 400 Average Score: 199.24 Episode 401 Average Score: 203.84 Environment solved in 301 episodes! Average Score: 203.84 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
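The checkpoint loaded below was produced by the training loop above, whose `learn` update (the part you implement in `dqn_agent.py`) moves the local network's Q-value toward a TD target built from the target network. For a single transition the arithmetic reduces to the sketch below; `gamma=0.99` and the scalar form are assumptions, since the real update works on torch batches:

```python
def td_target(reward, next_q_values, done, gamma=0.99):
    """One-step TD target: bootstrap from max_a Q_target(s', a) unless terminal."""
    return reward + gamma * max(next_q_values) * (1 - done)

print(td_target(1.0, [0.5, 2.0, -1.0, 0.0], done=0))     # 1.0 + 0.99 * 2.0 = 2.98
print(td_target(-100.0, [0.5, 2.0, -1.0, 0.0], done=1))  # terminal: just the reward
```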
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Requirement already satisfied: box2d in /opt/conda/lib/python3.6/site-packages (2.3.2) Requirement already satisfied: pyvirtualdisplay in /opt/conda/lib/python3.6/site-packages (1.3.2) Requirement already satisfied: EasyProcess in /opt/conda/lib/python3.6/site-packages (from pyvirtualdisplay) (0.3) ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. 
This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -208.91 Episode 200 Average Score: -139.36 Episode 300 Average Score: -78.241 Episode 400 Average Score: -45.36 Episode 500 Average Score: -46.01 Episode 600 Average Score: 11.482 Episode 700 Average Score: 125.49 Episode 800 Average Score: 169.92 Episode 900 Average Score: 182.33 Episode 1000 Average Score: 187.33 Episode 1090 Average Score: 200.02 Environment solved in 990 episodes! Average Score: 200.02 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(3): state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code %reset import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -183.04 Episode 200 Average Score: -98.230 Episode 300 Average Score: -41.98 Episode 400 Average Score: -4.286 Episode 500 Average Score: 60.65 Episode 600 Average Score: 125.72 Episode 700 Average Score: 196.29 Episode 714 Average Score: 201.15 Environment solved in 614 episodes! Average Score: 201.15 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
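Every `agent.step(state, action, reward, next_state, done)` call in the training cell above feeds a replay buffer, from which `learn` samples random minibatches to break the correlation between consecutive transitions. A minimal pure-Python sketch of such a buffer (the class and method names mirror common DQN code but are assumptions about `dqn_agent.py`, whose real buffer also converts samples to torch tensors):

```python
import random
from collections import deque, namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class ReplayBuffer:
    def __init__(self, buffer_size=100_000, batch_size=64, seed=0):
        self.memory = deque(maxlen=buffer_size)  # oldest experiences drop off first
        self.batch_size = batch_size
        self.rng = random.Random(seed)

    def add(self, state, action, reward, next_state, done):
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self):
        # Uniform sampling without replacement from stored experiences.
        return self.rng.sample(self.memory, k=self.batch_size)

    def __len__(self):
        return len(self.memory)

buf = ReplayBuffer(batch_size=4)
for t in range(10):
    buf.add(state=t, action=t % 4, reward=float(t), next_state=t + 1, done=t == 9)
print(len(buf), len(buf.sample()))  # 10 4
```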
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(500): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output cuda:0 ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, 
np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -168.47 Episode 200 Average Score: -114.07 Episode 300 Average Score: -21.722 Episode 400 Average Score: -24.35 Episode 500 Average Score: 34.967 Episode 600 Average Score: 105.08 Episode 700 Average Score: 70.539 Episode 800 Average Score: 62.56 Episode 900 Average Score: 81.17 Episode 1000 Average Score: 97.76 Episode 1100 Average Score: 130.19 Episode 1200 Average Score: 173.62 Episode 1229 Average Score: 202.22 Environment solved in 1129 episodes! Average Score: 202.22 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(1000): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
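Before amending `eps_start`, `eps_end`, or `eps_decay`, it helps to see how fast the multiplicative decay actually anneals exploration. A small standalone sketch (plain Python, using the same default values as the `dqn()` cell below):

```python
# Sketch: trace the per-episode epsilon schedule used by dqn().
def epsilon_schedule(eps_start=1.0, eps_end=0.01, eps_decay=0.995, n_episodes=2000):
    eps, history = eps_start, []
    for _ in range(n_episodes):
        history.append(eps)
        eps = max(eps_end, eps_decay * eps)   # same update as in dqn()
    return history

schedule = epsilon_schedule()
# epsilon hits its floor after about log(0.01)/log(0.995) ~ 919 episodes
first_floor = next(i for i, e in enumerate(schedule) if e <= 0.01 + 1e-12)
```

So with the defaults, roughly the first ~900 episodes still explore noticeably; shrinking `eps_decay` trades exploration for faster exploitation.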
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt # %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) # env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(agent, n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== agent (Agent): agent to train n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} 
episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) agent.save('checkpoint.pth', np.mean(scores_window)) break return scores try: scores = dqn(agent) except KeyboardInterrupt: agent.save('checkpoint.pth', 'N/A') else: # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -177.11 Episode 200 Average Score: -107.36 Episode 300 Average Score: -31.622 Episode 400 Average Score: -16.50 Episode 500 Average Score: 92.176 Episode 600 Average Score: 128.02 Episode 700 Average Score: 197.70 Episode 702 Average Score: 200.52 Environment solved in 602 episodes! Average Score: 200.52 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.load('checkpoint.pth') for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output Loaded checkpoint with the best avgerage score of 200.51539084486245 ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
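The `learn` step triggered by `agent.step(...)` is the core of DQN: build a TD target from the target network, then regress the local network's Q-value toward it. A minimal numpy sketch of that target computation (array names are illustrative, not the actual `dqn_agent.py` API):

```python
import numpy as np

def td_targets(rewards, q_next_max, dones, gamma=0.99):
    """TD target y = r + gamma * max_a' Q_target(s', a'), zeroed at terminal states."""
    return rewards + gamma * q_next_max * (1 - dones)

# toy batch of 3 transitions
rewards    = np.array([1.0, 0.5, -1.0])
q_next_max = np.array([2.0, 0.0, 3.0])    # max over actions of the target net's Q(s', .)
dones      = np.array([0.0, 0.0, 1.0])    # last transition is terminal
targets    = td_targets(rewards, q_next_max, dones)            # [2.98, 0.5, -1.0]
q_expected = np.array([2.5, 0.4, -0.8])   # hypothetical Q_local(s, a) predictions
mse_loss   = np.mean((targets - q_expected) ** 2)              # ~ 0.0935
```

The real implementation does the same arithmetic on torch tensors and backpropagates `mse_loss` through the local network only.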
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): # Sample action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) # Learn agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -162.82 Episode 200 Average Score: -132.97 Episode 300 Average Score: -52.032 Episode 400 Average Score: -23.06 Episode 500 Average Score: 46.637 Episode 527 Average Score: 32.76 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
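When watching the trained agent, `agent.act(state)` is called without an epsilon argument, so the agent acts (near-)greedily. The epsilon-greedy rule itself can be sketched standalone, with hypothetical Q-values:

```python
import random

def epsilon_greedy(q_values, eps=0.0):
    """With probability eps pick a random action, otherwise the argmax of the Q-values."""
    if random.random() < eps:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

q = [0.1, 1.7, -0.3, 0.9]          # hypothetical Q(s, .) for the 4 LunarLander actions
greedy_action = epsilon_greedy(q)  # eps=0 -> always the argmax, action 1
```

During training the same rule is called with the decaying `eps`, which is what makes early episodes exploratory and late episodes exploitative.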
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQN Run the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=300) # last 300 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.median(scores_window)>=200.0:

print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.median(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -194.43 Episode 200 Average Score: -150.35 Episode 300 Average Score: -112.41 Episode 400 Average Score: -60.63 Episode 500 Average Score: -32.66 Episode 600 Average Score: 0.75 Episode 700 Average Score: 2.57 Episode 800 Average Score: 18.53 Episode 900 Average Score: 10.88 Episode 1000 Average Score: 69.04 Episode 1100 Average Score: 123.53 Episode 1141 Average Score: 144.15 Environment solved in 1041 episodes! Average Score: 200.29 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output In C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The text.latex.preview rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. 
In C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The mathtext.fallback_to_cm rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: Support for setting the 'mathtext.fallback_to_cm' rcParam is deprecated since 3.3 and will be removed two minor releases later; use 'mathtext.fallback : 'cm' instead. In C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The validate_bool_maybe_none function was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The savefig.jpeg_quality rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The keymap.all_axes rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The animation.avconv_path rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The animation.avconv_args rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code %load_ext autoreload %autoreload 2 from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
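One stabilization trick hidden inside the agent is keeping a separate target Q-network that only slowly tracks the local one. A common scheme is a soft (Polyak) update, theta_target <- tau*theta_local + (1 - tau)*theta_target; whether this particular `dqn_agent.py` uses soft or periodic hard updates is not shown here, so the following is just a numpy sketch with made-up parameter arrays:

```python
import numpy as np

def soft_update(local_params, target_params, tau=1e-3):
    """theta_target <- tau*theta_local + (1 - tau)*theta_target, per parameter array."""
    return [tau * l + (1.0 - tau) * t for l, t in zip(local_params, target_params)]

local  = [np.ones(3)]               # pretend local-network weights
target = [np.zeros(3)]              # pretend target-network weights
target = soft_update(local, target) # each weight moves 0.1% of the way toward local
```

With a small `tau`, the target network changes slowly, which keeps the TD targets in the loss from chasing a moving estimate.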
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon best_score = -np.inf for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon score = np.mean(scores_window) print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, score), end="") if score > best_score and score > -50: torch.save(agent.qnetwork_local.state_dict(), 'models/checkpoint{}_{}.pth'.format(i_episode, score)) best_score = score if i_episode % 100 == 0: torch.save(agent.qnetwork_local.state_dict(), 'models/checkpoint{}_{}.pth'.format(i_episode, score)) print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, score)) if score >= 200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'models/checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -179.81 Episode 200 Average Score: -127.62 Episode 300 Average 
Score: -93.518 Episode 400 Average Score: -48.99 Episode 500 Average Score: 67.773 Episode 600 Average Score: 150.32 Episode 700 Average Score: 179.63 Episode 762 Average Score: 200.22 Environment solved in 662 episodes! Average Score: 200.22 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('models/checkpoint.pth')) for i in range(3): state = env.reset() for j in range(1000): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent # state = env.reset() # for j in range(200): # action = agent.act(state) # env.render() # state, reward, done, _ = env.step(action) # if done: # break # env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params
    ======
        n_episodes (int): maximum number of training episodes
        max_t (int): maximum number of timesteps per episode
        eps_start (float): starting value of epsilon, for epsilon-greedy action selection
        eps_end (float): minimum value of epsilon
        eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
    """
    scores = []                        # list containing scores from each episode
    scores_window = deque(maxlen=100)  # last 100 scores
    eps = eps_start                    # initialize epsilon
    for i_episode in range(1, n_episodes+1):
        state = env.reset()
        score = 0
        for t in range(max_t):
            action = agent.act(state, eps)
            next_state, reward, done, _ = env.step(action)
            agent.step(state, action, reward, next_state, done)
            state = next_state
            score += reward
            if done:
                break
        scores_window.append(score)        # save most recent score
        scores.append(score)               # save most recent score
        eps = max(eps_end, eps_decay*eps)  # decrease epsilon
        print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
        if i_episode % 100 == 0:
            print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
        if np.mean(scores_window)>=200.0:
            print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
            torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
            break
    return scores

scores = dqn(n_episodes=1)

# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show() ###Output Episode 1	Average Score: -280.79 ###Markdown 4. Watch a Smart Agent! In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break # env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline from pyvirtualdisplay import Display # display = Display(visible=0, size=(1400, 900)) # display.start() # is_ipython = 'inline' in plt.get_backend() # if is_ipython: # from IPython import display plt.ion() %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and Agent Initialize the environment in the code cell below.
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output cpu ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
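The `dqn()` cell that follows anneals epsilon with `eps = max(eps_end, eps_decay*eps)` once per episode. It can help to sanity-check that schedule on its own before training; the sketch below reproduces the same arithmetic (the helper name `epsilon_schedule` is illustrative, not part of the exercise files):

```python
def epsilon_schedule(n_episodes, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Return the epsilon value used in each episode by the dqn() loop:
    start at eps_start, multiply by eps_decay after every episode,
    and never drop below the eps_end floor."""
    eps, values = eps_start, []
    for _ in range(n_episodes):
        values.append(eps)
        eps = max(eps_end, eps_decay * eps)
    return values

values = epsilon_schedule(2000)
print(values[0])    # 1.0 -> fully random actions in the first episode
print(values[100])  # ~0.61 after 100 episodes
print(values[-1])   # clipped at the 0.01 floor
```

With these defaults, epsilon halves roughly every 138 episodes and hits the 0.01 floor a bit after episode 900, so most of the 2000 training episodes run nearly greedily.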
###Code
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Deep Q-Learning.

    Params
    ======
        n_episodes (int): maximum number of training episodes
        max_t (int): maximum number of timesteps per episode
        eps_start (float): starting value of epsilon, for epsilon-greedy action selection
        eps_end (float): minimum value of epsilon
        eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
    """
    scores = []                        # list containing scores from each episode
    scores_window = deque(maxlen=100)  # last 100 scores
    eps = eps_start                    # initialize epsilon
    for i_episode in range(1, n_episodes+1):
        state = env.reset()
        score = 0
        for t in range(max_t):
            action = agent.act(state, eps)
            next_state, reward, done, _ = env.step(action)
            agent.step(state, action, reward, next_state, done)
            state = next_state
            score += reward
            if done:
                break
        scores_window.append(score)        # save most recent score
        scores.append(score)               # save most recent score
        eps = max(eps_end, eps_decay*eps)  # decrease epsilon
        print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
        if i_episode % 100 == 0:
            print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
        if np.mean(scores_window)>=200.0:
            print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
            torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
            break
    return scores

scores = dqn()

# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show() ###Output
Episode 40	Average Score: -159.51
Episode 41	Average Score: -157.87
Episode 42	Average Score: -156.07
Episode 43	Average Score: -157.97
Episode 44	Average Score: -158.45
Episode 45	Average Score: -156.82
Episode 46	Average Score: -157.42
Episode 47	Average Score: -156.57
Episode 48	Average Score: -155.58
Episode 49	Average Score: -155.00
Episode 50	Average Score: -153.75
Episode 51	Average Score: -152.91
Episode 52	Average Score: -152.05
Episode 53	Average Score: -153.29
Episode 54	Average Score: -152.70
Episode 55	Average Score: -154.62
Episode 56	Average Score: -156.17
Episode 57	Average Score: -154.88
Episode 58	Average Score: -153.26
Episode 59	Average Score: -152.30
Episode 60	Average Score: -151.24
Episode 61	Average Score: -155.52
Episode 62	Average Score: -156.57
Episode 63	Average Score: -156.66
Episode 64	Average Score: -155.40
Episode 65	Average Score: -154.54
Episode 66	Average Score: -152.43
Episode 67	Average Score: -151.27
Episode 68	Average Score: -150.65
Episode 69	Average Score: -152.16
Episode 70	Average Score: -150.87
Episode 71	Average Score: -150.00
Episode 72	Average Score: -149.51
Episode 73	Average Score: -148.95
Episode 74	Average Score: -148.78
Episode 75	Average Score: -149.27
Episode 76	Average Score: -148.26
Episode 77	Average Score: -146.10 ###Markdown 4. Watch a Smart Agent! In the next code cell, you will load the trained weights from file to watch a smart agent!
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code %load_ext autoreload %autoreload 2 import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_) You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code
from dqn_agent import Agent

agent = Agent(state_size=8, action_size=4, seed=0)

# watch an untrained agent
state = env.reset()
for j in range(200):
    action = agent.act(state)
    env.render()
    state, reward, done, _ = env.step(action)
    if done:
        break

env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQN Run the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning.
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Requirement already satisfied: box2d in /opt/conda/lib/python3.6/site-packages You are using pip version 9.0.1, however version 19.0.3 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Requirement already satisfied: pyvirtualdisplay in /opt/conda/lib/python3.6/site-packages Requirement already satisfied: EasyProcess in /opt/conda/lib/python3.6/site-packages (from pyvirtualdisplay) You are using pip version 9.0.1, however version 19.0.3 is available. You should consider upgrading via the 'pip install --upgrade pip' command. ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. 
State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
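The `agent.step(...)` call inside the training cell below eventually triggers the `learn()` method the exercise asks you to finish. Stripped of the PyTorch plumbing, the target it computes for each sampled transition is the standard Q-learning target r + γ·max_a′ Q_target(s′, a′), with the bootstrap term dropped at terminal states. A plain-NumPy sketch of just that arithmetic (function and argument names are illustrative, and `gamma=0.99` is an assumed default, not necessarily the value in `dqn_agent.py`):

```python
import numpy as np

def td_targets(rewards, next_q_max, dones, gamma=0.99):
    """Q-learning targets: r + gamma * max_a' Q_target(s', a'),
    with the bootstrap term zeroed out for terminal transitions.
    next_q_max[i] stands in for max_a' Q_target(next_state_i, a')."""
    rewards = np.asarray(rewards, dtype=float)
    next_q_max = np.asarray(next_q_max, dtype=float)
    dones = np.asarray(dones, dtype=float)
    return rewards + gamma * next_q_max * (1.0 - dones)

# two sampled transitions: one ongoing, one terminal
targets = td_targets(rewards=[1.0, 0.0], next_q_max=[2.0, 5.0], dones=[0, 1])
# targets[0] == 1.0 + 0.99 * 2.0 == 2.98; targets[1] == 0.0 (no bootstrap at terminal)
```

The loss minimized in `learn()` is then just the mean-squared error between these targets and the local network's Q-values for the actions actually taken.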
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes + 1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay * eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -196.41 Episode 200 Average Score: -139.61 Episode 300 Average Score: -92.625 Episode 400 Average Score: -81.60 Episode 500 Average Score: -99.891 Episode 600 Average Score: -10.76 Episode 700 Average Score: 9.4534 Episode 800 Average Score: 35.27 Episode 900 Average Score: 70.23 Episode 1000 Average Score: 99.50 Episode 1100 
Average Score: 113.29 Episode 1200 Average Score: 138.72 Episode 1300 Average Score: 129.07 Episode 1400 Average Score: 115.30 Episode 1500 Average Score: 156.48 Episode 1600 Average Score: 173.16 Episode 1672 Average Score: 200.59 Environment solved in 1572 episodes! Average Score: 200.59 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth', map_location={'cuda:0': 'cpu'})) for i in range(3): state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(3): state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
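In the training loop below, `agent.act(state, eps)` is an epsilon-greedy choice over the network's Q-values. Stripped of the network, the selection rule itself reduces to a few lines (a sketch — `q_values` here stands in for the network's output, and `epsilon_greedy` is a hypothetical helper, not part of `dqn_agent.py`):

```python
import random


def epsilon_greedy(q_values, eps):
    """With probability 1 - eps exploit (argmax); otherwise explore uniformly."""
    if random.random() > eps:
        return max(range(len(q_values)), key=lambda a: q_values[a])
    return random.randrange(len(q_values))


# eps = 0.0 is pure exploitation: always the highest-valued action
greedy_action = epsilon_greedy([0.1, 0.9, 0.2, 0.0], eps=0.0)  # -> 1
```

With `eps = 1.0` every action is equally likely, which is exactly how the schedule below behaves on episode one.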
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
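A note on the schedule the cell above uses: with the defaults `eps_start=1.0`, `eps_end=0.01`, `eps_decay=0.995`, the per-episode update `eps = max(eps_end, eps_decay * eps)` hits its floor once `0.995**n <= 0.01` — after roughly 919 episodes, everything explores at a flat 1%. The count is a one-line computation (`episodes_to_floor` is a hypothetical helper for illustration):

```python
import math


def episodes_to_floor(eps_start, eps_end, eps_decay):
    """Smallest n with eps_start * eps_decay**n <= eps_end."""
    return math.ceil(math.log(eps_end / eps_start) / math.log(eps_decay))


n = episodes_to_floor(1.0, 0.01, 0.995)  # -> 919
```

This is worth checking whenever you amend the parameters: a faster decay shortens exploration and can make the agent settle on a worse policy.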
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
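Loading works because only the `state_dict` (a mapping of parameter names to tensors) was saved, not the whole model object. One wrinkle: if training ran on a GPU, the checkpoint's tensors remember their device, so a CPU-only machine needs `map_location` to remap them at load time — one of the load cells elsewhere in this notebook does exactly that. A minimal round-trip with a throwaway layer (a sketch, not the agent's actual checkpoint):

```python
import os
import tempfile

import torch
import torch.nn as nn

layer = nn.Linear(8, 4)
path = os.path.join(tempfile.mkdtemp(), 'checkpoint.pth')
torch.save(layer.state_dict(), path)  # only the parameter tensors are written

# map_location rewrites the storage device at load time, so a checkpoint
# written on 'cuda:0' still loads on a box with no GPU at all
restored = nn.Linear(8, 4)
restored.load_state_dict(torch.load(path, map_location={'cuda:0': 'cpu'}))
```

Because the checkpoint here was saved on CPU the remap is a no-op, but the same call is what makes a GPU-trained `checkpoint.pth` portable.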
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -176.57 Episode 200 Average Score: -108.36 Episode 300 Average Score: -66.695 Episode 400 Average Score: -27.84 Episode 500 Average Score: 58.665 Episode 600 Average Score: 146.96 Episode 700 Average Score: 142.03 Episode 800 Average Score: 179.34 Episode 900 Average Score: 175.17 Episode 930 Average Score: 200.11 Environment solved in 830 episodes! Average Score: 200.11 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(500): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -165.13 Episode 200 Average Score: -116.71 Episode 300 Average Score: -60.25 Episode 400 Average Score: -36.73 Episode 500 Average Score: 93.90 Episode 600 Average Score: 164.33 Episode 683 Average Score: 200.71 Environment solved in 583 episodes! Average Score: 200.71 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Requirement already satisfied: box2d in /opt/conda/lib/python3.6/site-packages (2.3.2) Collecting pyvirtualdisplay Downloading https://files.pythonhosted.org/packages/cf/ad/b15f252bfb0f1693ad3150b55a44a674f3cba711cacdbb9ae2f03f143d19/PyVirtualDisplay-0.2.4-py2.py3-none-any.whl Collecting EasyProcess (from pyvirtualdisplay) Downloading https://files.pythonhosted.org/packages/fa/29/40040d1d64a224a5e44df9572794a66494618ffe5c77199214aeceedb8a7/EasyProcess-0.2.7-py2.py3-none-any.whl Installing collected packages: EasyProcess, pyvirtualdisplay Successfully installed EasyProcess-0.2.7 pyvirtualdisplay-0.2.4 ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. 
State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
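In the loop below, each `agent.step(state, action, reward, next_state, done)` call pushes one transition into the agent's replay memory and periodically samples a minibatch from it. The buffer itself reduces to a bounded deque (a sketch of the idea — the real one in `dqn_agent.py` also collates the sampled fields into tensors):

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) tuples."""

    def __init__(self, buffer_size=100_000):
        self.memory = deque(maxlen=buffer_size)  # oldest transitions fall off the left

    def add(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.memory, k=batch_size)


buffer = ReplayBuffer()
for step in range(10):
    buffer.add(step, 0, 1.0, step + 1, False)
batch = buffer.sample(4)  # 4 transitions drawn uniformly at random
```

Sampling uniformly from old and new experience alike is what breaks the correlation between consecutive frames — the main reason plain online Q-learning with a network tends to diverge.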
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -208.91 Episode 200 Average Score: -139.36 Episode 300 Average Score: -78.241 Episode 400 Average Score: -45.36 Episode 500 Average Score: -46.01 Episode 600 Average Score: 11.482 Episode 700 Average Score: 125.49 Episode 800 Average Score: 169.92 Episode 900 Average Score: 182.33 Episode 1000 Average Score: 187.33 Episode 1090 Average 
Score: 200.02 Environment solved in 990 episodes! Average Score: 200.02 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(3): state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
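The checkpoint you are about to load holds the local network's weights; during training, the `learn` step also kept a second, slowly tracking target network via the soft update θ_target ← τ·θ_local + (1 − τ)·θ_target. On raw arrays the rule is one line (a sketch; the agent applies it parameter-by-parameter, and the τ = 1e-3 default is an assumption, not stated in this notebook):

```python
import numpy as np


def soft_update(local_params, target_params, tau=1e-3):
    """Move the target parameters a small fraction tau toward the local ones."""
    return tau * local_params + (1.0 - tau) * target_params


# with tau = 0.5 the target moves halfway toward the local values
target = soft_update(np.array([1.0, -1.0]), np.array([0.0, 0.0]), tau=0.5)  # -> [0.5, -0.5]
```

Keeping the target network nearly frozen is what stabilizes the bootstrapped loss: the regression target changes slowly even while the local network updates every learning step.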
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code %load_ext autoreload %autoreload 2 import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Requirement already satisfied: box2d in /opt/conda/lib/python3.6/site-packages (2.3.2) Collecting pyvirtualdisplay Downloading https://files.pythonhosted.org/packages/d0/8a/643043cc70791367bee2d19eb20e00ed1a246ac48e5dbe57bbbcc8be40a9/PyVirtualDisplay-1.3.2-py2.py3-none-any.whl Collecting EasyProcess (from pyvirtualdisplay) Downloading https://files.pythonhosted.org/packages/48/3c/75573613641c90c6d094059ac28adb748560d99bd27ee6f80cce398f404e/EasyProcess-0.3-py2.py3-none-any.whl Installing collected packages: EasyProcess, pyvirtualdisplay Successfully installed EasyProcess-0.3 pyvirtualdisplay-1.3.2 ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. 
State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) hist = [] # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) exp = (state, action) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) exp = exp + (reward,) hist.append(exp) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
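Inside the training loop, `agent.step(...)` eventually calls the `learn` method you completed in `dqn_agent.py`. For reference, here is a self-contained sketch of the textbook DQN update — a fixed TD target from the target network, and an MSE loss on the local network. The `Linear` layers, batch of random tensors, and `GAMMA` value are toy stand-ins invented for this sketch (this is one standard way to write the update, not necessarily the graded solution):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
GAMMA = 0.99  # a typical discount factor

# Toy stand-ins for the two Q-networks: 8 state dims in, 4 action values out.
qnetwork_local = torch.nn.Linear(8, 4)
qnetwork_target = torch.nn.Linear(8, 4)

# A fake minibatch of 5 experience tuples, shaped like the sampled replay batch.
states      = torch.randn(5, 8)
actions     = torch.randint(0, 4, (5, 1))
rewards     = torch.randn(5, 1)
next_states = torch.randn(5, 8)
dones       = torch.zeros(5, 1)  # 1.0 where the episode ended

# TD target: r + gamma * max_a' Q_target(s', a'), zeroed at terminal states.
q_targets_next = qnetwork_target(next_states).detach().max(1, keepdim=True)[0]
q_targets = rewards + GAMMA * q_targets_next * (1 - dones)

# Q-values the local network assigns to the actions actually taken.
q_expected = qnetwork_local(states).gather(1, actions)

loss = F.mse_loss(q_expected, q_targets)  # minimize this, then update the target net
```

The `detach()` on the target network's output is what keeps the TD target fixed during backpropagation.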
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -153.82 Episode 200 Average Score: -38.647 Episode 300 Average Score: 59.939 Episode 400 Average Score: 153.35 Episode 500 Average Score: 185.88 Episode 564 Average Score: 200.26 Environment solved in 464 episodes! Average Score: 200.26 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code pathname = 'checkpoint.pth' if torch.cuda.is_available(): map_location=lambda storage, loc: storage.cuda() else: map_location='cpu' checkpoint = torch.load(pathname, map_location=map_location) # load the weights from file agent.qnetwork_local.load_state_dict(checkpoint) for i in range(3): state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) agent.save('checkpoint.ckpt') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.load('checkpoint.ckpt') for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=1000) state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, 
np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -140.26 Episode 200 Average Score: -100.40 Episode 300 Average Score: -2.515 Episode 400 Average Score: 52.07 Episode 500 Average Score: 70.33 Episode 600 Average Score: 103.79 Episode 700 Average Score: 184.14 Episode 733 Average Score: 200.47 Environment solved in 633 episodes! Average Score: 200.47 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) score = 0 scores = [] for i in range(500): state = env.reset() for j in range(400): action = agent.act(state) #env.render() state, reward, done, _ = env.step(action) score += reward if reward is not None else 0 if done: break scores.append(score) score = 0 env.close() fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() scores = np.array(scores) scores.mean() scores.std() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
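Every loop in this notebook leans on the same gym interface: `reset()` returns the initial state, and `step(action)` returns a `(next_state, reward, done, info)` 4-tuple. A minimal stub makes the contract concrete — the `ToyEnv` class is invented here purely to illustrate the interface, not part of gym or the course code:

```python
class ToyEnv:
    """Invented stub with the same reset/step interface this notebook relies on."""

    def reset(self):
        self.t = 0
        return [0.0] * 8  # initial state (8-dimensional, like LunarLander-v2)

    def step(self, action):
        self.t += 1
        done = self.t >= 3  # end the episode after 3 steps
        return [0.0] * 8, 1.0, done, {}  # (next_state, reward, done, info)

env = ToyEnv()
state, score = env.reset(), 0.0
while True:
    state, reward, done, _ = env.step(0)
    score += reward
    if done:
        break
print(score)  # -> 3.0
```

The `state, reward, done, _ = env.step(action)` unpacking used throughout the training and watching cells is exactly this contract, with the `info` dict discarded.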
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
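If you do amend the parameters, `eps_decay` is the one with the clearest closed-form effect: epsilon shrinks geometrically from `eps_start` each episode until it is clipped at the `eps_end` floor, so the length of the exploration phase can be computed directly. With the supplied defaults the floor is first reached after 919 episodes:

```python
import math

eps_start, eps_end, eps_decay = 1.0, 0.01, 0.995  # the supplied defaults

# Epsilon follows eps_start * eps_decay**n until clipped at eps_end, so the
# floor is first reached at n = ceil(log(eps_end / eps_start) / log(eps_decay)).
n_floor = math.ceil(math.log(eps_end / eps_start) / math.log(eps_decay))
print(n_floor)  # -> 919
```

Shrinking `eps_decay` (say to 0.99) shortens that warm-up; after the floor is hit the agent keeps exploring 1% of the time.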
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
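Note that the watching cell calls `agent.act(state)` with no epsilon argument — presumably the default is 0, so the loaded agent acts greedily rather than exploring. A sketch of the epsilon-greedy rule it implements during training (the `epsilon_greedy` helper is invented for illustration, and the course code may phrase the branches differently):

```python
import random

import numpy as np

def epsilon_greedy(action_values, eps=0.0):
    """Sketch of an epsilon-greedy policy: explore with probability eps, else exploit."""
    if random.random() < eps:
        return random.randrange(len(action_values))  # exploration branch
    return int(np.argmax(action_values))             # greedy branch

# With eps=0.0 the exploration branch is unreachable, so the choice is deterministic.
print(epsilon_greedy([0.1, 0.7, 0.2, 0.0]))  # -> 1
```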
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Requirement already satisfied: box2d in /opt/conda/lib/python3.6/site-packages (2.3.2) Requirement already satisfied: pyvirtualdisplay in /opt/conda/lib/python3.6/site-packages (0.2.5) Requirement already satisfied: EasyProcess in /opt/conda/lib/python3.6/site-packages (from pyvirtualdisplay) (0.3) ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
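The "solved" check in the training function averages only the most recent 100 episode scores, and `collections.deque(maxlen=100)` provides that rolling window for free: once the deque is full, each append silently evicts the oldest entry. A quick self-contained demonstration with pretend scores:

```python
from collections import deque

import numpy as np

scores_window = deque(maxlen=100)
for score in range(250):        # pretend episode scores 0, 1, ..., 249
    scores_window.append(score)

# Past 100 entries, each append evicts the oldest, so only scores 150..249 remain.
print(len(scores_window), float(np.mean(scores_window)))  # -> 100 199.5
```

This is why early poor episodes stop dragging the reported average down after episode 100.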
###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() scores = dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995) # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() torch.save(agent.qnetwork_local.state_dict(), 
'checkpoint.pth') ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth', map_location=lambda storage, loc: storage)) for i in range(3): state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -156.83 Episode 200 Average Score: -84.787 Episode 300 Average Score: -62.93 Episode 400 Average Score: -8.512 Episode 500 Average Score: 113.83 Episode 591 Average Score: 201.52 Environment solved in 491 episodes! Average Score: 201.52 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) env.step(1) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline print(gym.__version__) print(torch.__version__) ###Output 0.9.6 0.4.1 ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
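One hedged sketch of the loss computation the `learn` method needs — the tensor layout of `experiences` as `(states, actions, rewards, next_states, dones)` and the function name `dqn_loss` are assumptions here, not the actual `dqn_agent.py` interface:

```python
# Sketch of the DQN TD loss -- the experiences layout is an assumption.
import torch
import torch.nn.functional as F

def dqn_loss(qnetwork_local, qnetwork_target, experiences, gamma):
    states, actions, rewards, next_states, dones = experiences
    # Max predicted Q-value for next states, from the (detached) target network
    q_targets_next = qnetwork_target(next_states).detach().max(1)[0].unsqueeze(1)
    # TD target: immediate reward plus discounted future value, zeroed at episode end
    q_targets = rewards + gamma * q_targets_next * (1 - dones)
    # Q-values of the actions actually taken, from the local (trainable) network
    q_expected = qnetwork_local(states).gather(1, actions)
    return F.mse_loss(q_expected, q_targets)
```

The `detach()` is the important part: gradients flow only into the local network, while the target network supplies fixed targets for this step.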
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) import time # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() time.sleep(0.02) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=3000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -218.65 Episode 200 Average Score: -165.58 Episode 300 Average Score: -91.612 Episode 400 Average Score: -42.45 Episode 492 Average Score: 102.40 Environment solved in 392 episodes! Average Score: 102.40 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() time.sleep(0.02) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
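The "sampled batch of experience tuples" mentioned above comes out of a replay buffer. A minimal illustrative version — not the `ReplayBuffer` actually shipped in `dqn_agent.py` — stores `(state, action, reward, next_state, done)` tuples and returns a uniformly random batch:

```python
# Minimal replay buffer sketch -- class and field names are illustrative only.
import random
from collections import deque, namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class ReplayBuffer:
    def __init__(self, buffer_size, batch_size, seed=0):
        self.memory = deque(maxlen=buffer_size)  # oldest experiences fall off the left
        self.batch_size = batch_size
        random.seed(seed)

    def add(self, state, action, reward, next_state, done):
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self):
        # Uniform sampling without replacement breaks the temporal correlation
        # between consecutive transitions, which stabilizes training.
        return random.sample(self.memory, k=self.batch_size)

    def __len__(self):
        return len(self.memory)
```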
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code #from dqn_agent import Agent from dqn_agent0 import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon #agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}\tPrio B : {}'.format(i_episode, np.mean(scores_window),agent.prio_b), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=230.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn(max_t=500) # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -153.65 Prio B : 0.41462499999993346 Episode 200 Average Score: -94.59 Prio B : 0.443274999999803153 Episode 300 Average Score: 16.30 Prio B : 0.494604999999569566 Episode 400 Average Score: 36.36 Prio B : 0.55390499999989816 Episode 500 Average Score: 33.16 Prio B : 0.6111900000002733 Episode 600 Average Score: 44.19 Prio B : 
0.6554100000005638 Episode 700 Average Score: 71.14 Prio B : 0.6914900000007994 Episode 800 Average Score: 122.21 Prio B : 0.7281600000010396 Episode 900 Average Score: 196.69 Prio B : 0.7668050000012928 Episode 1000 Average Score: 177.48 Prio B : 0.8017650000015218 Episode 1100 Average Score: 190.91 Prio B : 0.8362750000017479 Episode 1200 Average Score: 206.91 Prio B : 0.8712850000019773 Episode 1300 Average Score: 202.10 Prio B : 0.9027700000021835 Episode 1400 Average Score: 211.90 Prio B : 0.9338850000023874 Episode 1484 Average Score: 231.89 Prio B : 0.9605100000025618 Environment solved in 1384 episodes! Average Score: 231.89 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(20): state = env.reset() for j in range(500): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output /usr/local/Caskroom/miniconda/base/envs/learnai/lib/python3.6/site-packages/gym/envs/registration.py:14: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately. 
result = entry_point.load(False) ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code !pip install pyglet==1.5.11 import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output Requirement already satisfied: box2d in /Users/nin/opt/anaconda3/lib/python3.8/site-packages (2.3.10) ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
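A common way for the target Q-network to track the local one after each learning step is a soft update that blends the two parameter sets. The sketch below assumes the agent uses this scheme; the function name is illustrative:

```python
# Soft target-network update: theta_target <- tau*theta_local + (1-tau)*theta_target.
# Assumes the agent uses a soft-update scheme; the name soft_update is illustrative.
import torch

def soft_update(local_model, target_model, tau):
    for t_param, l_param in zip(target_model.parameters(), local_model.parameters()):
        t_param.data.copy_(tau * l_param.data + (1.0 - tau) * t_param.data)
```

With a small `tau` (e.g. 1e-3) the target network changes slowly, which keeps the TD targets from chasing a moving estimate.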
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -170.98 Episode 200 Average Score: -87.722 Episode 300 Average Score: -51.76 Episode 400 Average Score: -15.79 Episode 500 Average Score: 32.229 Episode 600 Average Score: 105.39 Episode 700 Average Score: 118.80 Episode 800 Average Score: 175.24 Episode 900 Average Score: 148.29 Episode 1000 Average Score: 161.34 Episode 1100 Average Score: 118.73 Episode 1198 Average Score: 200.48 Environment solved in 1098 episodes! 
Average Score: 200.48 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline torch.cuda.is_available() device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print("found this devide" , device) ###Output found this devide cuda:0 ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below.Action space (Discrete) 0- Do nothing 1- Fire left engine 2- Fire down engine 3- Fire right engine Landing pad is always at coordinates (0,0). Coordinates are the first two numbers in state vector. Reward for moving from the top of the screen to landing pad and zero speed is about 100..140 points. If lander moves away from landing pad it loses reward back. Episode finishes if the lander crashes or comes to rest, receiving additional -100 or +100 points. Each leg ground contact is +10. Firing main engine is -0.3 points each frame. Solved is 200 points. Landing outside landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land on its first attempt. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. 
State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output found this devide cuda:0 ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! Notes: the most confusing thing here is that we train the local model while a separate target model plays a supporting role. The target model is updated toward the local model only once in a while, via a weighted average.
The local model is updated every time we compute the loss. What we are doing is training the neural network so that the difference between local_state_action_value and target_next_state_best_action_value comes closer and closer to the reward, for all observations; the key is that the reward is the quantity we actually observe. Ideally you would update the target as soon as you update the local model's parameters, but that tends to cause numerical/optimization instability. That is why the target model is updated only after a number of local updates. ###Code def dqn(n_episodes=1000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores 
fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(500): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Requirement already satisfied: box2d in /home/ferenc/anaconda3/envs/drlnd/lib/python3.6/site-packages (2.3.10) Requirement already satisfied: pyvirtualdisplay in /home/ferenc/anaconda3/envs/drlnd/lib/python3.6/site-packages (1.3.2) Requirement already satisfied: EasyProcess in /home/ferenc/anaconda3/envs/drlnd/lib/python3.6/site-packages (from pyvirtualdisplay) (0.3) ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
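The local/target update described in the note at the top of this section — train the local network so that Q_local(s, a) approaches r + gamma * max_a' Q_target(s', a'), and move the target weights only slowly — can be sketched with plain NumPy. The function names and constants below are illustrative, not the course's `dqn_agent.py` API:

```python
import numpy as np

GAMMA = 0.99  # discount factor (assumed value)
TAU = 1e-3    # soft-update rate (assumed value)

def td_targets(rewards, q_target_next, dones, gamma=GAMMA):
    """TD target: observed reward plus discounted best next-state value
    taken from the *target* network; zero future value at terminal states."""
    return rewards + gamma * q_target_next.max(axis=1) * (1 - dones)

def mse_loss(q_local_taken, targets):
    """The loss the local network is trained to minimise."""
    return float(np.mean((q_local_taken - targets) ** 2))

def soft_update(target_params, local_params, tau=TAU):
    """Blend the target weights slowly toward the local weights."""
    return [(1 - tau) * t + tau * l for t, l in zip(target_params, local_params)]

# Tiny batch: 2 transitions, 4 actions
rewards = np.array([1.0, -1.0])
dones = np.array([0.0, 1.0])                      # second transition is terminal
q_target_next = np.array([[0.1, 0.5, 0.2, 0.0],
                          [0.9, 0.3, 0.3, 0.3]])
q_local_taken = np.array([1.2, -0.8])             # Q_local(s, a) for the actions taken

targets = td_targets(rewards, q_target_next, dones)
print(targets)   # the terminal transition keeps only its observed reward
print(mse_loss(q_local_taken, targets))
```

Updating the target only through `soft_update` (or only every few steps) keeps the regression target from chasing the network being trained, which is the instability mentioned above.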
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. 
You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code torch.device("cuda:0" if torch.cuda.is_available() else "cpu") def dqn(n_episodes=6000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint_20200629a_slvd.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() torch.save(agent.qnetwork_local.state_dict(), 'checkpoint_20200629a.pth') ###Output _____no_output_____ ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # FA The network from my GPU training run did not load # Thx 2: https://stackoverflow.com/a/55759312 if torch.cuda.is_available(): map_location=lambda storage, loc: storage.cuda() else: map_location='cpu' print('map_location = ', map_location) # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint_20200629a.pth', map_location=map_location)) for i in range(3): state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(400): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) # agent pick an action next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) # agent update action values state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -188.32 Episode 200 Average Score: -134.38 Episode 300 Average Score: -61.41 Episode 400 Average Score: -7.88 Episode 500 Average Score: 14.75 Episode 600 Average Score: 100.31 Episode 700 Average Score: 148.46 Episode 800 Average Score: 193.58 Episode 841 Average Score: 200.33 Environment solved in 741 episodes! Average Score: 200.33 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() while True: action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Table of Contents 1 Deep Q-Network (DQN) 1.0.1 1. Import the Necessary Packages 1.0.2 2. Instantiate the Environment and Agent 1.0.3 3. Train the Agent with DQN 1.0.4 4. Watch a Smart Agent! 1.0.5 5. Explore Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code !pip install Cython !pip install gym[all] env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`.
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -173.85 Episode 200 Average Score: -113.61 Episode 300 Average Score: -110.62 Episode 400 Average Score: 14.62 Episode 500 Average Score: 95.31 Episode 600 Average Score: 122.11 Episode 675 Average Score: 200.72 Environment solved in 575 episodes! Average Score: 200.72 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below.
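The training loops in this notebook call `agent.step(...)`, which stores each `(state, action, reward, next_state, done)` tuple in a replay buffer and later samples random mini-batches from it. A minimal deque-based buffer with that behaviour might look like the sketch below (illustrative structure, not the `ReplayBuffer` shipped with the exercise):

```python
import random
from collections import deque, namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class MiniReplayBuffer:
    """Fixed-size buffer: once full, the oldest experiences are evicted."""
    def __init__(self, capacity, batch_size, seed=0):
        self.memory = deque(maxlen=capacity)
        self.batch_size = batch_size
        random.seed(seed)

    def add(self, state, action, reward, next_state, done):
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self):
        # Uniform random sampling breaks the correlation between consecutive steps
        return random.sample(self.memory, self.batch_size)

    def __len__(self):
        return len(self.memory)

buffer = MiniReplayBuffer(capacity=100, batch_size=4)
for t in range(150):                 # more additions than capacity
    buffer.add(state=t, action=t % 4, reward=1.0, next_state=t + 1, done=False)

print(len(buffer))                   # capped at capacity: 100
batch = buffer.sample()
print(len(batch))                    # 4
```

Because `maxlen` evicts the oldest entries, only the 100 most recent transitions (states 50..149 here) remain available for sampling.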
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
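The `dqn()` training loops in this notebook all use `eps_start=1.0`, `eps_end=0.01`, `eps_decay=0.995`, so epsilon after n episodes is max(eps_end, eps_start * 0.995**n). A quick worked check of that schedule (plain Python, names chosen here for illustration):

```python
import math

eps_start, eps_end, eps_decay = 1.0, 0.01, 0.995

def eps_after(n):
    """Epsilon after n episodes of multiplicative decay with a floor."""
    return max(eps_end, eps_start * eps_decay ** n)

# Episodes needed before epsilon first reaches the floor:
n_floor = math.ceil(math.log(eps_end / eps_start) / math.log(eps_decay))
print(n_floor)                    # 919
print(round(eps_after(100), 4))   # ~0.6058 after 100 episodes
print(eps_after(2000))            # 0.01 -- clamped at eps_end
```

So with the default decay the agent still explores heavily for the first few hundred episodes, which matches the slow early improvement visible in the training outputs above; a smaller `eps_decay` reaches the floor sooner.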
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -145.00 Episode 200 Average Score: -84.28 Episode 300 Average Score: -15.36 Episode 400 Average Score: 32.38 Episode 500 Average Score: 124.72 Episode 600 Average Score: 176.02 Episode 647 Average Score: 200.86 Environment solved in 547 episodes! Average Score: 200.86 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`.
Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
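Before peeking at the solution folder, the loss computation described above — TD targets from the target network, estimates from the local network — can be sketched as follows. This is one common way to write it, not necessarily the solution's; the function name `dqn_loss`, the `gamma=0.99` default, and the exact tensor shapes are assumptions.

```python
# Hedged sketch of the TD-error loss inside Agent.learn -- one common way,
# not necessarily the solution's. Assumes the replay buffer yields tensors:
# states (B,8), actions (B,1) long, rewards (B,1), next_states (B,8), dones (B,1).
import torch
import torch.nn.functional as F

def dqn_loss(qnetwork_local, qnetwork_target, experiences, gamma=0.99):
    states, actions, rewards, next_states, dones = experiences
    # Max predicted Q over next actions, from the *target* network (detached
    # so no gradients flow into the target parameters).
    q_targets_next = qnetwork_target(next_states).detach().max(1)[0].unsqueeze(1)
    # TD target: r + gamma * max_a' Q_target(s', a'), zeroed on terminal steps.
    q_targets = rewards + gamma * q_targets_next * (1 - dones)
    # Q estimate of the action actually taken, from the *local* network.
    q_expected = qnetwork_local(states).gather(1, actions)
    return F.mse_loss(q_expected, q_targets)
```

The loss would then be backpropagated through the local network only, followed by an optimizer step and (in this exercise's agent) a soft update of the target network.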
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -166.87 Episode 200 Average Score: -114.55 Episode 300 Average Score: -32.913 Episode 400 Average Score: -23.08 Episode 500 Average Score: 58.531 Episode 600 Average Score: 141.74 Episode 700 Average Score: 171.75 Episode 757 Average Score: 200.01 Environment solved in 657 episodes! Average Score: 200.01 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) state = env.reset() print(state) ###Output [-5.91564178e-04 9.42304904e-01 -5.99357188e-02 1.12770955e-01 6.92289264e-04 1.35763153e-02 0.00000000e+00 0.00000000e+00] ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. 
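The training loop anneals epsilon multiplicatively each episode (`eps = max(eps_end, eps_decay*eps)`), so the schedule has a simple closed form worth knowing before tuning. The helper name `epsilon_after` is mine; the defaults match the notebook's.

```python
# Closed-form sanity check on the epsilon-greedy schedule used by dqn():
# after n episodes, epsilon is eps_start * eps_decay**n, floored at eps_end.
def epsilon_after(n, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    return max(eps_end, eps_start * eps_decay ** n)
```

With the default parameters, epsilon reaches the 0.01 floor after roughly 919 episodes (log(0.01)/log(0.995) ≈ 919), so most of a 2000-episode run is spent near-greedy.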
You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -202.16 Episode 200 Average Score: -161.44 Episode 300 Average Score: -112.82 Episode 400 Average Score: -66.755 Episode 500 Average Score: -32.34 Episode 600 Average Score: 43.891 Episode 700 Average Score: 
18.147 Episode 800 Average Score: 117.13 Episode 900 Average Score: 147.81 Episode 1000 Average Score: 174.87 Episode 1100 Average Score: 179.89 Episode 1133 Average Score: 200.12 Environment solved in 1033 episodes! Average Score: 200.12 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -191.98 Episode 200 Average Score: -162.27 Episode 300 Average Score: -119.96 Episode 400 Average Score: -92.128 Episode 500 Average Score: -70.24 Episode 600 Average Score: -53.46 Episode 700 Average Score: 132.56 Episode 800 Average Score: 77.959 Episode 900 Average Score: 144.15 Episode 1000 Average Score: 167.96 Episode 1100 Average Score: 198.25 Episode 1110 Average Score: 200.45 Environment solved in 1010 episodes! 
Average Score: 200.45 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Requirement already satisfied: box2d in /opt/conda/lib/python3.6/site-packages (2.3.2) Requirement already satisfied: pyvirtualdisplay in /opt/conda/lib/python3.6/site-packages (1.3.2) Requirement already satisfied: EasyProcess in /opt/conda/lib/python3.6/site-packages (from pyvirtualdisplay) (0.3) ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. 
This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent if False: agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) scores = [] for i in range(3): score = 0. 
state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) score += reward if done: break scores.append(score) env.close() print(scores) ###Output [81.287976298780649, 41.157798479697632, 222.35589809964495] ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline MODEL_NAME = "distributional" FILE_NUM = 0 ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code # random play score scores = [] for n in range(100): score = 0 state = env.reset() for j in range(1000): state, reward, done, _ = env.step(env.action_space.sample()) score += reward if done: break scores.append(score) print("Random Score: {} +- {} ({} trials)".format(np.mean(scores), np.std(scores), len(scores))) from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() score = 0 for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) score += reward if done: break env.close() print("Score:", score) ###Output Score: -420.0730482249217 ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995, a_start=0., a_end=1.0, beta_start=0., beta_end=1.0, continue_after_solved=True, save_name="checkpoint_dueling_solved.pth"): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon prioritized = hasattr(agent, 'beta') # if using prioritized experience replay, initialize beta if prioritized: print("Priority Used") agent.a = a_start agent.beta = beta_start a_increment = (a_end - a_start) / n_episodes beta_increment = (beta_end - beta_start) / n_episodes else: print("Priority Not Used") solved = False epi_str_max_len = len(str(n_episodes)) for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon if prioritized: agent.a = min(a_end, agent.a + a_increment) agent.beta = min(beta_end, agent.beta + beta_increment) print('\rEpisode {:>{epi_max_len}d} | Current Score: {:>7.2f} | Average Score: {:>7.2f} | Epsilon: {:>6.4f}'\ .format(i_episode, score, np.mean(scores_window), eps, epi_max_len=epi_str_max_len), end="") if prioritized: print(' | A: {:>6.4f} | Beta: {:>6.4f}'.format(agent.a, agent.beta), end='') print(' ', end='') if i_episode % 100 == 0: print('\rEpisode {:>{epi_max_len}} | Current Score: {:>7.2f} | Average Score: {:>7.2f} | Epsilon: {:>6.4f}'\ .format(i_episode, score, np.mean(scores_window), eps, epi_max_len=epi_str_max_len), end='') if prioritized: print(' | A: {:>6.4f} | Beta: {:>6.4f}'.format(agent.a, 
agent.beta), end='') print(' ') if not solved and np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), save_name) solved = True if not continue_after_solved: break return scores scores = dqn(n_episodes=3000, max_t=1000, eps_start=0., eps_end=0., eps_decay=0., a_start=0.8, a_end=0.8, beta_start=0.0, beta_end=1.0, continue_after_solved=True, save_name="checkpoint_{}_solved{}.pth".format(MODEL_NAME, FILE_NUM)) # plot the scores plt.rcParams['figure.facecolor'] = 'w' fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() torch.save(agent.qnetwork_local.state_dict(), "checkpoint_{}_final{}.pth".format(MODEL_NAME, FILE_NUM)) agent.qnetwork_local.load_state_dict(torch.load("checkpoint_{}_final{}.pth".format(MODEL_NAME, FILE_NUM))) ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
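Before the demo below, a note on the `a` and `beta` values annealed in the loop above: they drive prioritized experience replay, where `a` controls how strongly TD-error priorities skew sampling and `beta` anneals the importance-sampling correction toward 1. The actual `dqn_agent.py` is not reproduced here, so the following is only a sketch of the standard proportional-prioritization formulation those knobs usually plug into:

```python
def per_sample_stats(priorities, a, beta):
    """Sampling probabilities and normalized importance-sampling weights for
    proportional prioritized replay (a sketch; not the notebook's actual agent)."""
    scaled = [p ** a for p in priorities]        # p_i^a
    total = sum(scaled)
    probs = [s / total for s in scaled]          # P(i) = p_i^a / sum_j p_j^a
    n = len(priorities)
    weights = [(n * q) ** (-beta) for q in probs]  # w_i = (N * P(i))^-beta
    w_max = max(weights)
    return probs, [w / w_max for w in weights]   # normalize so updates only scale down

probs, weights = per_sample_stats([1.0, 2.0, 4.0], a=0.8, beta=0.5)
# a=0 would give uniform sampling; beta=1 fully corrects the sampling bias,
# which is why the run above anneals beta from 0 toward 1 over training.
```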
###Code if hasattr(agent, 'noise'): agent.noise(False) for i in range(10): state = env.reset() score = 0 for j in range(1000): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) score += reward if done: break print("Game {} Score: {} in {} steps".format(i, score, j + 1)) if hasattr(agent, 'noise'): agent.noise(True) env.close() ###Output Game 0 Score: 249.98635359973989 in 166 steps Game 1 Score: 188.83143334284688 in 1000 steps Game 2 Score: 273.1654934029272 in 221 steps Game 3 Score: 244.99494942745116 in 171 steps Game 4 Score: 272.1399594288923 in 200 steps Game 5 Score: 192.10919093304733 in 350 steps Game 6 Score: 270.2023976942239 in 169 steps Game 7 Score: 293.39158700312123 in 210 steps Game 8 Score: 259.39039660129225 in 167 steps Game 9 Score: 242.49702623279921 in 191 steps ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from model import QNetwork model = QNetwork(state_size=8, action_size=2, seed=123) model.fc1.weight.data.fill_(2), model.fc1.bias.data.fill_(0) x = torch.ones((2,8)) x[1,:]=2 x model.fc1.weight, model.fc1.bias model.forward(x) from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) # if np.mean(scores_window)>=200.0: # print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) # torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') # break torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -112.18 Episode 200 Average Score: -38.76 Episode 300 Average Score: 31.23 Episode 400 Average Score: 87.55 Episode 500 Average Score: 192.87 Episode 600 Average Score: 201.85 Episode 700 Average Score: 206.87 Episode 800 Average Score: 217.77 Episode 900 Average Score: 209.90 Episode 1000 Average Score: 192.08 Episode 1100 Average Score: 212.00 Episode 1200 Average Score: 
233.01 Episode 1300 Average Score: 223.77 Episode 1400 Average Score: 233.66 Episode 1500 Average Score: 236.82 Episode 1600 Average Score: 224.43 Episode 1700 Average Score: 241.93 Episode 1800 Average Score: 234.88 Episode 1900 Average Score: 239.75 Episode 2000 Average Score: 227.86 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Requirement already satisfied: box2d in /opt/conda/lib/python3.6/site-packages (2.3.2) Collecting pyvirtualdisplay Downloading https://files.pythonhosted.org/packages/d0/8a/643043cc70791367bee2d19eb20e00ed1a246ac48e5dbe57bbbcc8be40a9/PyVirtualDisplay-1.3.2-py2.py3-none-any.whl Collecting EasyProcess (from pyvirtualdisplay) Downloading https://files.pythonhosted.org/packages/48/3c/75573613641c90c6d094059ac28adb748560d99bd27ee6f80cce398f404e/EasyProcess-0.3-py2.py3-none-any.whl Installing collected packages: EasyProcess, pyvirtualdisplay Successfully installed EasyProcess-0.3 pyvirtualdisplay-1.3.2 ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
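One detail of the "Environment solved" messages in the training logs is worth making explicit before instantiating the environment again: training is considered solved when the trailing 100-episode average first reaches 200, and the reported episode count subtracts those 100 warm-up episodes. A standalone sketch of that stopping rule (with one small robustness tweak, noted in the comment):

```python
from collections import deque

def first_solved_episode(scores, window=100, target=200.0):
    """Return the (i_episode - window) value printed by dqn(), or None if never solved."""
    recent = deque(maxlen=window)
    for i_episode, score in enumerate(scores, start=1):
        recent.append(score)
        # unlike the notebook's check, wait for a full window so a single lucky
        # early episode cannot trigger "solved"
        if len(recent) == window and sum(recent) / window >= target:
            return i_episode - window
    return None

# e.g. 150 poor episodes followed by steadily high scores
demo = [0.0] * 150 + [250.0] * 200
print(first_solved_episode(demo))  # → 130
```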
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. 
Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -264.17 Episode 200 Average Score: -290.40 Episode 300 Average Score: -184.98 Episode 400 Average Score: -134.66 Episode 500 
Average Score: -95.10 Episode 600 Average Score: -87.60 Episode 700 Average Score: -78.87 Episode 800 Average Score: -61.81 Episode 900 Average Score: -51.60 Episode 1000 Average Score: 32.14 Episode 1100 Average Score: 98.30 Episode 1200 Average Score: 120.66 Episode 1300 Average Score: 110.10 Episode 1400 Average Score: 120.43 Episode 1500 Average Score: 63.76 Episode 1600 Average Score: 168.30 Episode 1700 Average Score: 177.77 Episode 1800 Average Score: 185.09 Episode 1900 Average Score: 176.42 Episode 2000 Average Score: 191.45 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(3): state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`.
Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -242.63 Episode 200 Average Score: -293.05 Episode 300 Average Score: -260.48 Episode 400 Average Score: -247.90 Episode 500 Average Score: -232.12 Episode 600 Average Score: -280.34 Episode 700 Average Score: -260.91 Episode 800 Average Score: -231.58 Episode 900 Average Score: -201.56 Episode 1000 Average Score: -322.61 Episode 1100 Average Score: -225.29 Episode 1200 Average Score: -194.12 Episode 1300 Average Score: -131.85 Episode 1400 
Average Score: -368.48 Episode 1500 Average Score: -571.93 Episode 1600 Average Score: -593.36 Episode 1700 Average Score: -457.95 Episode 1800 Average Score: -636.06 Episode 1900 Average Score: -524.62 Episode 2000 Average Score: -612.31 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
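Before loading the checkpoint below, note that the training above depends on `agent.step` periodically syncing the target network toward the local one. `dqn_agent.py` is not reproduced here, but the usual rule is a Polyak soft update, θ_target ← τ·θ_local + (1−τ)·θ_target with a small τ such as 1e-3, sketched framework-free below:

```python
def soft_update(local_params, target_params, tau=1e-3):
    """Blend each target parameter a fraction tau toward its local counterpart."""
    return [tau * l + (1.0 - tau) * t for l, t in zip(local_params, target_params)]

local = [1.0, 2.0, -3.0]
target = [0.0, 0.0, 0.0]
for _ in range(1000):  # repeated soft updates converge target -> local
    target = soft_update(local, target)
```

After k updates the target parameter equals local·(1 − (1 − τ)^k), so with τ = 1e-3 the target network lags the local one by roughly a thousand learning steps, which is what keeps the bootstrapped Q-targets stable.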
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip3 install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Requirement already satisfied: box2d in /opt/conda/lib/python3.6/site-packages You are using pip version 9.0.1, however version 18.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Collecting pyvirtualdisplay Downloading https://files.pythonhosted.org/packages/39/37/f285403a09cc261c56b6574baace1bdcf4b8c7428c8a7239cbba137bc0eb/PyVirtualDisplay-0.2.1.tar.gz Collecting EasyProcess (from pyvirtualdisplay) Downloading https://files.pythonhosted.org/packages/45/3a/4eecc0c7995a13a64739bbedc0d3691fc574245b7e79cff81905aa0c2b38/EasyProcess-0.2.5.tar.gz Building wheels for collected packages: pyvirtualdisplay, EasyProcess Running setup.py bdist_wheel for pyvirtualdisplay ... [?25ldone [?25h Stored in directory: /root/.cache/pip/wheels/d1/8c/16/1c64227974ae29c687e4cc30fd691d5c0fd40f54446dde99da Running setup.py bdist_wheel for EasyProcess ... 
done Stored in directory: /root/.cache/pip/wheels/41/22/19/af15ef6264c58b625a82641ed7483ad05e258fbd8925505227 Successfully built pyvirtualdisplay EasyProcess Installing collected packages: EasyProcess, pyvirtualdisplay Successfully installed EasyProcess-0.2.5 pyvirtualdisplay-0.2.1 You are using pip version 9.0.1, however version 18.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder.
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code %load_ext autoreload %autoreload 2 from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode 
{}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -208.91 Episode 200 Average Score: -139.36 Episode 300 Average Score: -78.241 Episode 400 Average Score: -45.36 Episode 500 Average Score: -46.01 Episode 600 Average Score: 11.482 Episode 700 Average Score: 125.49 Episode 800 Average Score: 169.92 Episode 900 Average Score: 182.33 Episode 1000 Average Score: 187.33 Episode 1090 Average Score: 200.02 Environment solved in 990 episodes! Average Score: 200.02 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(3): state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. 
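The cell that follows relies on Gym's standard interface: `reset()` returns an initial state and `step(action)` returns a `(state, reward, done, info)` tuple. As a minimal sketch of that contract, here is a hypothetical stub environment — not Gym's actual LunarLander physics; the 8-dimensional state, 4 actions, and the -0.3 per-step reward are stand-ins mirroring the real environment's shapes:

```python
import random

class StubEnv:
    """Hypothetical stand-in that mimics the Gym interface used in this notebook:
    reset() -> state, step(action) -> (state, reward, done, info)."""
    def __init__(self, state_size=8, n_actions=4, max_steps=5):
        self.state_size = state_size
        self.n_actions = n_actions
        self.max_steps = max_steps
        self.t = 0

    def seed(self, s):
        random.seed(s)

    def reset(self):
        self.t = 0
        return [random.uniform(-1, 1) for _ in range(self.state_size)]

    def step(self, action):
        assert 0 <= action < self.n_actions
        self.t += 1
        state = [random.uniform(-1, 1) for _ in range(self.state_size)]
        reward = -0.3                     # stand-in cost, like firing the main engine
        done = self.t >= self.max_steps   # real episodes end on landing or crashing
        return state, reward, done, {}

env_demo = StubEnv()
env_demo.seed(0)
state = env_demo.reset()
score, done = 0.0, False
while not done:                           # same rollout loop shape as the notebook's
    state, reward, done, _ = env_demo.step(random.randrange(env_demo.n_actions))
    score += reward
print(len(state), score)
```

The agent in the notebook is dropped into exactly this loop, with `agent.act(state)` replacing the random action.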
###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! 
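The exploration schedule driven by `eps_start`, `eps_end`, and `eps_decay` is geometric with a floor, so before amending the defaults it helps to know how long exploration actually lasts. A small sketch, assuming the default values used in `dqn()`:

```python
import math

eps_start, eps_end, eps_decay = 1.0, 0.01, 0.995

def epsilon_at(episode):
    """Epsilon after `episode` multiplicative decays, clipped at the eps_end floor."""
    return max(eps_end, eps_start * eps_decay ** episode)

# Episode at which the schedule first hits its floor:
# eps_start * eps_decay**n <= eps_end  =>  n >= log(eps_end/eps_start)/log(eps_decay)
floor_episode = math.ceil(math.log(eps_end / eps_start) / math.log(eps_decay))
print(floor_episode)        # with the defaults, exploration decays for ~919 episodes
print(epsilon_at(100))
```

With the defaults the agent keeps a nontrivial amount of random exploration for roughly the first nine hundred episodes, which lines up with the long negative-score stretch at the start of the training logs.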
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
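When watching the trained agent below, `agent.act(state)` is called without an epsilon argument; in the usual course implementation `act` defaults to `eps=0.0`, i.e. a purely greedy policy (an assumption about `dqn_agent.py`, which is not shown here). A pure-Python sketch of that epsilon-greedy rule over made-up action values:

```python
import random

def act(action_values, eps=0.0):
    """Epsilon-greedy selection over a list of action values.
    Hypothetical sketch: the real agent obtains these values from a
    forward pass of qnetwork_local."""
    if eps <= 0.0 or random.random() > eps:
        # exploit: index of the largest action value
        return max(range(len(action_values)), key=action_values.__getitem__)
    # explore: uniformly random action
    return random.randrange(len(action_values))

q = [0.1, -0.4, 0.9, 0.3]   # made-up Q-values for the 4 LunarLander actions
greedy = act(q)             # eps defaults to 0.0 -> always the argmax
print(greedy)
```

During training the same rule is called with the decaying `eps`, so early episodes are mostly random and late episodes mostly greedy.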
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code %load_ext autoreload %autoreload 1 %aimport dqn_agent agent = dqn_agent.Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) if i_episode % 100 == 0: env.render() next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, 
np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -98.99 Episode 200 Average Score: -32.08 Episode 300 Average Score: 67.224 Episode 388 Average Score: 200.32 Environment solved in 288 episodes! Average Score: 200.32 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(1000): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym !pip install box2d import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline !python -m pip install pyvirtualdisplay from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output Collecting box2d Using cached https://files.pythonhosted.org/packages/cc/7b/ddb96fea1fa5b24f8929714ef483f64c33e9649e7aae066e5f5023ea426a/Box2D-2.3.2.tar.gz Building wheels for collected packages: box2d Building wheel for box2d (setup.py) ... 
done Stored in directory: /Users/ashdasstooie/Library/Caches/pip/wheels/35/09/fd/054e73da7184a08071ed889bf45772719c7bb6d2dd13f166a1 Successfully built box2d Installing collected packages: box2d Successfully installed box2d-2.3.2 ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder.
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() import numpy as np hidden_layers = np.array([64,84]) hidden_layers[:-1] ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. 
This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code print(env.reset()) from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0, fc_sizes=[256,128,64,32]) # watch an untrained agent for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=3000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) # if np.mean(scores_window)>=200.0: # print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) # torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') # break return scores torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -210.51 Episode 200 Average Score: -134.47 Episode 300 Average Score: -102.38 Episode 400 Average Score: -46.857 Episode 500 Average Score: 15.138 Episode 600 Average Score: 93.838 Episode 700 Average Score: 124.17 Episode 800 Average Score: 168.55 Episode 900 Average Score: 199.76 Episode 1000 Average Score: 200.66 Episode 1100 Average Score: 196.12 Episode 1200 Average 
Score: 174.62 Episode 1300 Average Score: 185.35 Episode 1400 Average Score: 194.20 Episode 1500 Average Score: 199.96 Episode 1600 Average Score: 190.66 Episode 1700 Average Score: 184.27 Episode 1800 Average Score: 203.22 Episode 1900 Average Score: 210.24 Episode 2000 Average Score: 205.63 Episode 2100 Average Score: 210.40 Episode 2200 Average Score: 223.86 Episode 2300 Average Score: 228.53 Episode 2400 Average Score: 235.66 Episode 2500 Average Score: 232.74 Episode 2600 Average Score: 230.38 Episode 2700 Average Score: 231.05 Episode 2800 Average Score: 231.41 Episode 2900 Average Score: 225.52 Episode 3000 Average Score: 206.43 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN) & Double-DQN---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline import time ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. See [`Lunar Lander`](https://gym.openai.com/envs/LunarLander-v2/) and [`CS221`](https://stanford-cs221.github.io/autumn2019-extra/posters/113.pdf) for more details.
Some information about the environment from the gym homepage* The lander maneuvers by engaging thrusters (with a noisy outcome) and consuming fuel.* **State has 8 components**: horizontal and vertical position, horizontal and vertical velocity, angle and angular velocity, and left and right leg contact.* Control agent can take **four actions** (i) do nothing, (ii) fire main engine (push up), (iii) fire left engine (push right), and (iv) fire right engine (push left)* Vehicle starts from the top of the screen (with random initial velocity) and landing pad is always at coordinates (0,0)* Reward for moving from the top of the screen to landing pad and zero speed is about 100..140 points. If lander moves away from landing pad it loses reward back. Episode finishes if the lander crashes or comes to rest, receiving additional -100 or +100 points. Each leg ground contact is +10. Firing main engine is -0.3 points each frame. Firing side engine is -0.03 points each frame. Solved is 200 points. * Landing outside landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land on its first attempt. Please see source code for details. ###Code #LunarLander-v2 -> set the mean score value to >= 200 env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) # print('State shape: ', env.observation_space.shape[0]) obs_space_size = env.observation_space.shape[0] act_space_size = env.action_space.n ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. 
This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) Implementing the Deep-Q-Network according to [`Mnih et al., 2015`](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf). Implementing the Double-Deep-Q-Network according to [`van Hasselt et al., 2015`](https://arxiv.org/pdf/1509.06461.pdf). ###Code # Deep-Q-Network implementation # from dqn_agent import Agent # Double-Deep-Q-Network implementation from double_dqn_agent import Agent # agent = Agent(state_size=8, action_size=4, seed=0) agent = Agent(state_size=obs_space_size, action_size=act_space_size, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) # print(action) env.render() time.sleep(0.05) state, reward, done, _ = env.step(action) # print('state', state) # print('reward', reward) # print('done', done) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance!
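The only change Double DQN makes to the update is the bootstrap target: the local network selects the next action and the target network evaluates it, which damps the overestimation bias of taking a plain `max` over one network (van Hasselt et al., 2015). A sketch with made-up per-transition numbers — the `GAMMA = 0.99` discount is an assumption, and the real agents compute these targets over batches of tensors:

```python
GAMMA = 0.99  # assumed discount factor

def dqn_target(reward, done, q_target_next):
    """Vanilla DQN: bootstrap from the max of the target network's values."""
    return reward + GAMMA * max(q_target_next) * (1 - done)

def double_dqn_target(reward, done, q_local_next, q_target_next):
    """Double DQN: the local network picks the action, the target network evaluates it."""
    a_star = max(range(len(q_local_next)), key=q_local_next.__getitem__)
    return reward + GAMMA * q_target_next[a_star] * (1 - done)

# Made-up next-state action values for one non-terminal transition with reward 1.0:
q_local_next  = [0.2, 0.8, 0.1, 0.4]   # local net prefers action 1
q_target_next = [0.5, 0.3, 0.9, 0.2]   # target net's largest value is for action 2
print(dqn_target(1.0, 0, q_target_next))
print(double_dqn_target(1.0, 0, q_local_next, q_target_next))
```

The DQN target bootstraps from 0.9 (the target net's own max), while the Double-DQN target bootstraps from 0.3 (the target net's value for the action the *local* net chose) — decoupling selection from evaluation in exactly this way is the whole difference between `dqn_agent.py` and `double_dqn_agent.py` here.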
###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) # # return value(s) for debug purposes # ret = agent.step(state, action, reward, next_state, done) # if ret != None: # print(ret) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
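`torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')` during training and `load_state_dict(torch.load(...))` here form a simple round-trip of the network's parameter dictionary. As a dependency-free analogy of that round-trip, using `pickle` and a made-up dict of plain lists in place of torch tensors:

```python
import os
import pickle
import tempfile

# Stand-in "state_dict": parameter name -> weights (the real one maps names to tensors).
state_dict = {'fc1.weight': [[0.1, -0.2], [0.3, 0.4]], 'fc1.bias': [0.0, 0.5]}

path = os.path.join(tempfile.gettempdir(), 'checkpoint_demo.pkl')
with open(path, 'wb') as f:   # analogue of torch.save(state_dict, 'checkpoint.pth')
    pickle.dump(state_dict, f)
with open(path, 'rb') as f:   # analogue of load_state_dict(torch.load('checkpoint.pth'))
    restored = pickle.load(f)
os.remove(path)

print(restored == state_dict)
```

Because only parameters are saved, the loading code must first rebuild an `Agent` with the same architecture — which is why the cell below constructs the agent before calling `load_state_dict`.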
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(400): action = agent.act(state) # print('action', action) env.render() time.sleep(0.05) state, reward, done, _ = env.step(action) # print('state', state) # print('reward', reward) if done: print('Steps: ', j) break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() is_ipython = 'inline' in plt.get_backend() if is_ipython: from IPython import display plt.ion() ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State space:', env.observation_space) print('State shape: ', env.observation_space.shape) print('Action space:', env.action_space) print('Number of actions: ', env.action_space.n) ###Output State space: Box(8,) State shape: (8,) Action space: Discrete(4) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() img = plt.imshow(env.render(mode='rgb_array')) for j in range(200): action = agent.act(state) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -175.08 Episode 200 Average Score: -115.35 Episode 300 Average Score: -47.70 Episode 400 Average Score: -25.04 Episode 500 Average Score: -8.38 Episode 600 Average Score: 75.15 Episode 700 Average Score: 165.78 Episode 800 Average Score: 180.93 Episode 881 Average Score: 202.48 Environment solved in 781 episodes! Average Score: 202.48 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) fig = plt.figure(figsize=(20, 20)) for i in range(5): state = env.reset() ax = fig.add_subplot(5, 1, i+1) sub_plot = ax.imshow(env.render(mode='rgb_array')) for j in range(1000): action = agent.act(state) sub_plot.set_data(env.render(mode='rgb_array')) ax.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. 
The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -175.08 Episode 200 Average Score: -112.76 Episode 300 Average Score: -55.254 Episode 400 Average Score: -31.70 Episode 500 Average Score: -10.43 Episode 600 Average Score: 141.10 Episode 700 Average Score: 191.81 Episode 800 Average Score: 173.10 Episode 860 Average Score: 202.35 Environment solved in 760 episodes! Average Score: 202.35 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
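For the architecture in `model.py`, one reasonable sketch (the layer widths here are an assumption — the exercise deliberately leaves the architecture up to you) is a small fully connected network that maps the 8-dimensional state to one Q-value per action:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Maps states to action values with two hidden layers."""

    def __init__(self, state_size, action_size, seed, fc1_units=64, fc2_units=64):
        super(QNetwork, self).__init__()
        self.seed = torch.manual_seed(seed)
        self.fc1 = nn.Linear(state_size, fc1_units)
        self.fc2 = nn.Linear(fc1_units, fc2_units)
        self.fc3 = nn.Linear(fc2_units, action_size)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.fc3(x)  # one raw Q-value per action, no output activation

net = QNetwork(state_size=8, action_size=4, seed=0)
q_values = net(torch.zeros(1, 8))  # a batch of one state -> shape (1, 4)
```

The output layer is linear on purpose: Q-values are unbounded regression targets, so no softmax or other activation belongs there.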
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent import time agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -181.33 Episode 200 Average Score: -109.26 Episode 300 Average Score: -20.978 Episode 400 Average Score: 2.6764 Episode 500 Average Score: 104.73 Episode 600 Average Score: 109.87 Episode 700 Average Score: 146.53 Episode 800 Average Score: 127.29 Episode 900 Average Score: 169.88 Episode 1000 Average Score: 198.56 Episode 1018 Average Score: 200.48 Environment solved in 918 episodes! Average Score: 200.48 ###Markdown 4. 
Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
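The loss the `learn` method needs is the mean squared TD error between Q_local(s, a) and the bootstrapped target r + γ · max_a' Q_target(s', a') · (1 − done). A plain-NumPy sketch of that computation on a toy batch — the array values are made up for illustration; the real method does the same arithmetic with torch tensors:

```python
import numpy as np

gamma = 0.99

# Toy batch of 3 transitions: Q-values from the two networks, plus rewards/dones.
q_local_s = np.array([[0.2, 0.5, 0.1, 0.0],    # Q_local(s, .) for each state
                      [0.3, 0.1, 0.4, 0.2],
                      [0.0, 0.0, 0.9, 0.1]])
q_target_s2 = np.array([[0.1, 0.6, 0.2, 0.0],  # Q_target(s', .) for each next state
                        [0.5, 0.2, 0.1, 0.3],
                        [0.4, 0.4, 0.4, 0.4]])
actions = np.array([1, 2, 2])                  # actions actually taken
rewards = np.array([1.0, -0.5, 100.0])
dones = np.array([0.0, 0.0, 1.0])              # terminal steps bootstrap nothing

q_expected = q_local_s[np.arange(3), actions]              # Q_local(s, a)
q_targets = rewards + gamma * q_target_s2.max(axis=1) * (1 - dones)
loss = np.mean((q_expected - q_targets) ** 2)              # MSE TD error
```

Note that only the target network appears inside the max: that frozen copy is what keeps the regression target from chasing itself as the local weights change.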
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
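Behind `agent.step(...)`, the agent stores each transition in a replay buffer and learns from uniformly sampled mini-batches, which breaks the correlation between consecutive experiences. A minimal stdlib sketch of that mechanism — capacity and batch size here are illustrative, and the provided `dqn_agent.py` wraps the same idea with torch tensors:

```python
import random
from collections import deque, namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class ReplayBuffer:
    """Fixed-size buffer of experience tuples with uniform random sampling."""

    def __init__(self, buffer_size, batch_size, seed=0):
        self.memory = deque(maxlen=buffer_size)  # oldest experiences fall off the left
        self.batch_size = batch_size
        self.rng = random.Random(seed)

    def add(self, state, action, reward, next_state, done):
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self):
        return self.rng.sample(self.memory, k=self.batch_size)

    def __len__(self):
        return len(self.memory)

buf = ReplayBuffer(buffer_size=100, batch_size=4)
for t in range(150):  # more adds than capacity: the first 50 are evicted
    buf.add(state=t, action=t % 4, reward=1.0, next_state=t + 1, done=False)
batch = buf.sample()
```

The `deque(maxlen=...)` does the eviction for free; only the most recent `buffer_size` transitions are ever candidates for sampling.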
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -172.33 Episode 200 Average Score: -93.058 Episode 300 Average Score: -56.44 Episode 400 Average Score: 31.472 Episode 500 Average Score: 111.24 Episode 600 Average Score: 172.85 Episode 685 Average Score: 200.54 Environment solved in 585 episodes! Average Score: 200.54 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. 
Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -205.22 Episode 200 Average Score: -195.68 Episode 300 Average Score: -117.05 Episode 400 Average Score: -53.729 Episode 500 Average Score: -8.824 Episode 600 Average Score: 59.49 Episode 700 Average Score: 150.37 Episode 800 Average Score: 136.40 Episode 900 Average Score: 149.20 Episode 1000 Average Score: 136.96 Episode 1100 Average Score: 126.36 Episode 1200 Average Score: 161.75 Episode 1278 Average Score: 200.50 Environment solved in 1178 
episodes! Average Score: 200.50 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. 
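The agent maintains two copies of the network, and after each learning step the target network's weights are typically nudged toward the local network's with a soft update, θ_target ← τ·θ_local + (1 − τ)·θ_target. A NumPy illustration on a single stand-in weight matrix — τ = 1e-3 is a common choice in this exercise's starter code, but treat the exact value and update scheme as assumptions here:

```python
import numpy as np

def soft_update(local_w, target_w, tau=1e-3):
    """Blend a fraction tau of the local weights into the target weights."""
    return tau * local_w + (1.0 - tau) * target_w

local_w = np.ones((2, 2))    # stand-in for a trained layer
target_w = np.zeros((2, 2))  # stale target copy
for _ in range(1000):        # repeated updates drift the target toward the local net
    target_w = soft_update(local_w, target_w)
```

Because each update moves the target only a tiny fraction of the way, the regression targets in the TD loss change slowly even while the local network is updated every learning step.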
(_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
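The `agent.act(state, eps)` calls in the loops above implement epsilon-greedy selection: with probability `eps` pick a uniformly random action, otherwise take the argmax over the network's Q-values. A self-contained sketch — the Q-values here are a fixed toy list rather than a network forward pass:

```python
import random

def epsilon_greedy(q_values, eps, rng):
    """Return a random action with probability eps, else the greedy one."""
    if rng.random() < eps:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

rng = random.Random(0)
q_values = [0.1, 0.9, 0.3, 0.2]  # toy Q(s, .) for a 4-action state

greedy = epsilon_greedy(q_values, eps=0.0, rng=rng)  # eps=0 always exploits
choices = [epsilon_greedy(q_values, eps=1.0, rng=rng) for _ in range(1000)]  # eps=1 always explores
```

This is why the watching cells call `agent.act(state)` without `eps`: at evaluation time you want the purely greedy policy, while during training `eps` anneals from `eps_start` down to `eps_end`.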
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): # get next best action for current policy action = agent.act(state, eps) # act this action next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment 
solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -161.94 Episode 200 Average Score: -120.58 Episode 300 Average Score: -84.159 Episode 400 Average Score: -31.38 Episode 500 Average Score: 94.443 Episode 593 Average Score: 200.62 Environment solved in 493 episodes! Average Score: 200.62 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(10): state = env.reset() for j in range(1000): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output _____no_output_____ ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. 
Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -166.87 Episode 200 Average Score: -88.783 Episode 300 Average Score: -49.98 Episode 400 Average Score: -25.24 Episode 500 Average Score: 83.562 Episode 600 Average Score: 193.78 Episode 614 Average Score: 200.12 Environment solved in 514 episodes! Average Score: 200.12 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -148.12 Episode 200 Average Score: -87.310 Episode 300 Average Score: -36.67 Episode 400 Average Score: -49.29 Episode 500 Average Score: -11.91 Episode 600 Average Score: 49.123 Episode 700 Average Score: 66.56 Episode 800 Average Score: 72.35 Episode 900 Average Score: 48.82 Episode 994 Average Score: 52.33 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. 
Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output _____no_output_____ ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. 
(_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: 
{:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -120.41 Episode 200 Average Score: -55.667 Episode 300 Average Score: 26.432 Episode 400 Average Score: 191.49 Episode 404 Average Score: 200.42 Environment solved in 304 episodes! Average Score: 200.42 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! ###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: print('reward:', reward) break env.close() ###Output _____no_output_____ ###Markdown Deep Q-Network (DQN)---In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment. 1. Import the Necessary Packages ###Code import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 2. Instantiate the Environment and AgentInitialize the environment in the code cell below. ###Code env = gym.make('LunarLander-v2') env.seed(0) print('State shape: ', env.observation_space.shape) print('Number of actions: ', env.action_space.n) ###Output State shape: (8,) Number of actions: 4 ###Markdown Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together, - Define a neural network architecture in `model.py` that maps states to action values. 
This file is mostly empty - it's up to you to define your own deep Q-network!- Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._) ###Code from dqn_agent import Agent agent = Agent(state_size=8, action_size=4, seed=0) # watch an untrained agent state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____ ###Markdown 3. Train the Agent with DQNRun the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance! ###Code def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. 
Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): state = env.reset() score = 0 for t in range(max_t): action = agent.act(state, eps) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=200.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window))) torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth') break return scores scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ###Output Episode 100 Average Score: -170.19 Episode 200 Average Score: -96.741 Episode 300 Average Score: -29.28 Episode 400 Average Score: 53.873 Episode 500 Average Score: 101.99 Episode 600 Average Score: 115.25 Episode 700 Average Score: 153.77 Episode 795 Average Score: 201.49 Environment solved in 695 episodes! Average Score: 201.49 ###Markdown 4. Watch a Smart Agent!In the next code cell, you will load the trained weights from file to watch a smart agent! 
###Code # load the weights from file agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth')) for i in range(5): state = env.reset() for j in range(200): action = agent.act(state) env.render() state, reward, done, _ = env.step(action) if done: break env.close() ###Output _____no_output_____
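Across the DQN notebooks above, the piece left for the reader is the `learn` method in `dqn_agent.py`: build TD targets from the target network, then regress the local network toward them with MSE. A framework-free sketch of just the target computation (plain Python; the batch numbers are made up for illustration, and this is one plausible shape, not the course solution):

```python
# TD target for each transition (s, a, r, s', done):
#   y = r + gamma * max_a' Q_target(s', a')   if the episode continues
#   y = r                                     if done
def td_targets(rewards, dones, q_next, gamma=0.99):
    """rewards: list of floats; dones: list of bools;
    q_next: per-action Q-values of s' from the target network."""
    return [r + gamma * max(q) * (0.0 if d else 1.0)
            for r, d, q in zip(rewards, dones, q_next)]

targets = td_targets(rewards=[1.0, -1.0, 0.5],
                     dones=[False, True, False],
                     q_next=[[0.2, 0.8], [0.5, 0.1], [0.0, 0.4]])
print(targets)  # approximately [1.792, -1.0, 0.896]
```

In the real `learn`, these targets would be compared against the local network's Q-values for the taken actions, followed by an optimizer step and a soft update of the target network.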
scripts_and_code/scripts/fix_agents_and_synsets.ipynb
###Markdown Synsets ###Code lines = [] with open('synsets.ar.txt', 'r') as finput: for i in finput: lines.append(i) lines[:10] lines2 = [] for i in lines: if i[0] == '&': lines2.append(i) else: lines2.append(master_reconstruct_input(i, 0)) lines2[:10] with open('synsets.ar.v2.txt', 'w') as output: for i in lines2: output.write(i) ###Output _____no_output_____
Heart_Disease_Risk_Prediction_Model_Final_Best.ipynb
###Markdown Problem DefinitionGiven clinical parameters about a patient, can we predict whether or not they have heart disease? FeaturesExplanation of fields in dataset Data Dictionary1. `age` - age in years2. `sex` - (1 = male; 0 = female)3. `cp` - chest pain type * 0: Typical angina * 1: Atypical angina * 2: Non-anginal pain * 3: Asymptomatic4. `trestbps` - resting blood pressure (in mm Hg on admission to the hospital) 5. `chol` - Serum cholesterol in mg/dl6. `fbs` - (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)7. `restecg` - resting electrocardiographic results * 0: Nothing to note * 1: ST-T Wave abnormality * 2: Possible or definite left ventricular hypertrophy8. `thalach` - maximum heart rate achieved9. `exang` - exercise induced angina (1 = yes; 0 = no)10. `oldpeak` - ST depression induced by exercise relative to rest; measures the stress of the heart during exercise (an unhealthy heart will stress more)11. `slope` - the slope of the peak exercise ST segment * 0: Upsloping: better heart rate with exercise (uncommon) * 1: Flatsloping: minimal change (typical healthy heart) * 2: Downsloping: signs of an unhealthy heart12. `ca` - number of major vessels (0-3) colored by fluoroscopy * a colored vessel means the doctor can see the blood passing through * the more blood movement the better (no clots)13. `thal` - thallium stress result * 1,3: normal * 6: fixed defect: used to be a defect but is ok now * 7: reversible defect: no proper blood movement when exercising14. `target` - have disease or not (1=yes, 0=no) (= the predicted attribute) IntroductionFirst, load the appropriate libraries. 
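The integer codes in the data dictionary above are exactly what the preprocessing cell below turns into readable labels with `Series.map` before one-hot encoding. A tiny standard-library sketch of that decode step (the example row is hypothetical, not taken from `heart.csv`):

```python
# Readable labels for two of the integer-coded fields in the data dictionary.
CP_LABELS = {0: "Typical angina", 1: "Atypical angina",
             2: "Non-anginal pain", 3: "Asymptomatic"}
SLOPE_LABELS = {0: "Upsloping", 1: "Flatsloping", 2: "Downsloping"}

def decode(row):
    out = dict(row)
    out["cp"] = CP_LABELS[row["cp"]]
    out["slope"] = SLOPE_LABELS[row["slope"]]
    return out

print(decode({"age": 63, "cp": 3, "slope": 0}))
# {'age': 63, 'cp': 'Asymptomatic', 'slope': 'Upsloping'}
```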
###Code !pip install -q seaborn !pip install -q git+https://github.com/tensorflow/docs import pathlib import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from scipy import stats import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from keras.utils.generic_utils import get_custom_objects print("Tensorflow version:", tf.__version__) import tensorflow_docs as tfdocs import tensorflow_docs.plots import tensorflow_docs.modeling dataset_path = "heart.csv" column_names = ["Age","Gender","Angina","Rest_BP","Cholesterole","Fasting_BS","ECG","Stress_BPM","SI_Angina","Stress_STDep","Slope", "Colored_Vessels","Thalium","Diagnose"] raw_dataset = pd.read_csv(dataset_path, names=column_names, comment='\t', sep=",", skipinitialspace=True) df = raw_dataset.copy() df.head() #with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also # print(df) df.info() df['Gender'] = df['Gender'].map(lambda x: {0: 'Female', 1: 'Male'}.get(x)) df['Angina'] = df['Angina'].map(lambda x: {0: 'Angina', 1: 'Atypical_Angina', 2: 'Non-Anginal'}.get(x)) df['Slope'] = df['Slope'].map(lambda x: {0: 'Upsloping', 1: 'Flatsloping', 2: 'Downsloping'}.get(x)) df.pop("Thalium") #df['Thalium'] = df['Thalium'].map(lambda x: {6: 'Thalium_Fixed', 7: 'Thalium_Reversable'}.get(x)) df = pd.get_dummies(df, prefix='', prefix_sep='') df.head() train_dataset = df.sample(frac=0.80,random_state=0) test_dataset = df.drop(train_dataset.index) sns.pairplot(train_dataset[["Age", "Cholesterole", "Stress_BPM", "Rest_BP"]], diag_kind="kde") train_stats = train_dataset.describe() train_stats.pop("Diagnose") train_stats = train_stats.transpose() train_stats train_labels = train_dataset.pop('Diagnose') test_labels = test_dataset.pop('Diagnose') # Normalize Data def norm(x): return (x - train_stats['mean']) / train_stats['std'] normed_train_data = norm(train_dataset) normed_test_data = norm(test_dataset) 
###Output _____no_output_____ ###Markdown Building Model ###Code def build_model(): model = keras.Sequential([ layers.Dense(64, activation='tanh', input_shape=[len(train_dataset.keys())]), layers.Dense(36, activation='tanh'), layers.Dense(18, activation='tanh'), layers.Dense(1, activation='sigmoid'), ]) optimizer = tf.keras.optimizers.RMSprop(0.001) # keep the configured 0.6-threshold metric and actually pass it to compile() binary_accuracy = tf.keras.metrics.BinaryAccuracy( name="binary_accuracy", dtype=None, threshold=0.6 ) model.compile(loss='mse', optimizer=optimizer, metrics=[binary_accuracy]) return model model = build_model() model.summary() !pip install visualkeras import visualkeras visualkeras.layered_view(model, legend=True) # legend is optional! ###Output _____no_output_____ ###Markdown Try short batch ###Code example_batch = normed_train_data[:10] example_result = model.predict(example_batch) example_result EPOCHS = 500 history = model.fit( normed_train_data, train_labels, epochs=EPOCHS, validation_split = 0.2, verbose=0, callbacks=[tfdocs.modeling.EpochDots()]) hist = pd.DataFrame(history.history) hist['epoch'] = history.epoch hist.tail() plotter = tfdocs.plots.HistoryPlotter(smoothing_std=2) plotter.plot({'Basic': history}, metric = "binary_accuracy") plt.ylim([0, 1]) plt.ylabel('Binary Accuracy') model = build_model() # The patience parameter is the amount of epochs to check for improvement early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=20) early_history = model.fit(normed_train_data, train_labels, epochs=EPOCHS, validation_split = 0.2, verbose=0, callbacks=[early_stop, tfdocs.modeling.EpochDots()]) plotter.plot({'Early Stopping': early_history}, metric = "binary_accuracy") plt.ylim([0, 1]) plt.ylabel('Binary Accuracy') loss, accuracy = model.evaluate(normed_test_data, test_labels, verbose=2) print("Testing set Binary Accuracy: {:5.2f}".format(accuracy)) test_predictions = model.predict(normed_test_data).flatten() test_predictions error = test_predictions - test_labels plt.hist(error, bins = 100) 
plt.xlabel("Prediction Error") _ = plt.ylabel("Count") print(np.mean(error)) print(np.std(error)) print(len(error)) model.save('trained_model.h5') test_dataset_merged = pd.DataFrame(test_labels, columns=['Diagnose']) #test_dataset_merged['Diagnose'] = test_labels test_dataset_merged['Prediction'] = test_predictions with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also print(test_dataset_merged) ###Output Diagnose Prediction 1 1 0.908432 9 1 0.939467 17 1 0.745600 25 1 0.736077 28 1 0.902405 31 1 0.799601 32 1 0.958100 35 1 0.867871 38 1 0.978289 39 1 0.938237 42 1 0.120030 47 1 0.951791 53 1 0.966545 57 1 0.814468 65 1 0.935711 70 1 0.717376 72 1 0.951351 79 1 0.677138 87 1 0.978656 88 1 0.956054 99 1 0.791441 105 1 0.839751 115 1 0.974097 117 1 0.690811 120 1 0.050756 127 1 0.974741 128 1 0.973127 132 1 0.959399 147 1 0.980912 151 1 0.782255 163 1 0.801069 165 0 0.021625 169 0 0.354187 172 0 0.581394 174 0 0.026519 177 0 0.945642 183 0 0.300279 185 0 0.158988 186 0 0.103607 192 0 0.143440 193 0 0.020258 195 0 0.203775 197 0 0.385867 202 0 0.235728 211 0 0.088580 231 0 0.025860 242 0 0.048832 243 0 0.033178 244 0 0.033073 251 0 0.026277 260 0 0.140285 265 0 0.162310 267 0 0.346325 271 0 0.206103 273 0 0.722772 277 0 0.960003 278 0 0.503853 280 0 0.046625 289 0 0.140886 291 0 0.124958 292 0 0.023169
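The Diagnose/Prediction table above pairs each true label with a sigmoid output, so it can be summarized into a confusion matrix with nothing but a threshold. A standard-library sketch (the four sample pairs are copied from the table; 0.5 is an assumed cut-off here, while the notebook configures 0.6 for its training metric, and `confusion` is a hypothetical helper name):

```python
def confusion(pairs, threshold=0.5):
    """pairs: iterable of (true_label, sigmoid_output)."""
    counts = {"tp": 0, "fp": 0, "tn": 0, "fn": 0}
    for y, p in pairs:
        pred = 1 if p >= threshold else 0
        key = ("t" if pred == y else "f") + ("p" if pred == 1 else "n")
        counts[key] += 1
    return counts

# Four (Diagnose, Prediction) pairs copied from the table above:
sample = [(1, 0.908432), (1, 0.120030), (0, 0.021625), (0, 0.945642)]
print(confusion(sample))  # {'tp': 1, 'fp': 1, 'tn': 1, 'fn': 1}
```

Sweeping `threshold` over the full table would trade false positives against false negatives, which matters more than raw accuracy for a screening task like this.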
Modulo1/Exercicios/Modulo1-Exercicio-BuzzfeedQuiz.ipynb
###Markdown Buzzfeed Quiz:1. Show the options2. Capture the inputs3. Compute the result based on the inputs ###Code insuportabilidade = 0 print('Teste de Insuportabilidade') print('\nResponda as perguntas com s ou n minúsculos\n') resposta1 = input('Você costuma dar opiniões não solicitadas? ') if resposta1 == 's': insuportabilidade = insuportabilidade + 1 resposta2 = input('\nVocê gosta de falar mal dos outros? ') if resposta2 == 's': insuportabilidade = insuportabilidade + 1 resposta3 = input('\nVocê mastiga de boca aberta? ') if resposta3 == 's': insuportabilidade = insuportabilidade + 1 resposta4 = input('\nVocê fala muito sobre um mesmo assunto? ') if resposta4 == 's': insuportabilidade = insuportabilidade + 1 resposta5 = input('\nVocê costuma interromper quando as pessoas estão falando? ') if resposta5 == 's': insuportabilidade = insuportabilidade + 1 resposta6 = input('\nVocê gosta de fazer perguntas pessoais, mesmo sem ter intimidade com a pessoa? ') if resposta6 == 's': insuportabilidade = insuportabilidade + 1 resposta7 = input('\nVocê ouve música alta sem fone de ouvido? ') if resposta7 == 's': insuportabilidade = insuportabilidade + 1 resposta8 = input('\nVocê costuma tocar muito nas pessoas? ') if resposta8 == 's': insuportabilidade = insuportabilidade + 1 resposta9 = input('\nVocê manda indiretas com frequência? ') if resposta9 == 's': insuportabilidade = insuportabilidade + 1 resposta10 = input('\nVocê gosta de dar opiniões polêmicas só para ver como as pessoas vão reagir? ') if resposta10 == 's': insuportabilidade = insuportabilidade + 1 resposta11 = input('\nVocê fala de política mais do que de outros assuntos? ') if resposta11 == 's': insuportabilidade = insuportabilidade + 1 resposta12 = input('\nVocê fala muito alto? ') if resposta12 == 's': insuportabilidade = insuportabilidade + 1 resposta13 = input('\nVocê gosta das coisas sempre do seu jeito? 
') if resposta13 == 's': insuportabilidade = insuportabilidade + 1 resposta14 = input('\nVocê prefere perguntar ou pedir ajuda a alguém antes de tentar descobrir as respostas sozinha? ') if resposta14 == 's': insuportabilidade = insuportabilidade + 1 resposta15 = input('\nVocê costuma puxar assunto com desconhecidos em filas e outros locais públicos? ') if resposta15 == 's': insuportabilidade = insuportabilidade + 1 if insuportabilidade == 0: print('\nVocê é a pessoa mais legal do mundo!') elif 0 < insuportabilidade <= 5: print('\nVocê tem seus momentos de chatice') elif 5 < insuportabilidade <= 10: print('\nTalvez você seja meio chato') else: print('\nNinguém te suporta') ###Output Teste de Insuportabilidade Responda as perguntas com s ou n minúsculos Você costuma dar opiniões não solicitadas?s Você gosta de falar mal dos outros? s Você mastiga de boca aberta? s Você fala muito sobre um mesmo assunto? s Você costuma interromper quando as pessoas estão falando? s Você gosta de fazer perguntas pessoais, mesmo sem ter intimidade com a pessoa? s Você ouve música alta sem fone de ouvido? s Você costuma tocar muito nas pessoas? s Você manda indiretas com frequência? s Você gosta de dar opiniões polêmicas só para ver como as pessoas vão reagir? s Você fala de política mais do que de outros assuntos? s Você fala muito alto?s Você gosta das coisas sempre do seu jeito?n Você prefere perguntar ou pedir ajuda a alguém antes de tentar descobrir as respostas sozinha?n Você costuma puxar assunto com desconhecidos em filas e outros locais públicos?n Ninguém te suporta
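The fifteen near-identical question blocks above follow one pattern, so they can be collapsed into a loop over a question list. A non-interactive sketch of the same scoring logic (answers passed in as a list instead of `input()`, question texts abridged and in English):

```python
QUESTIONS = [
    "Do you give unsolicited opinions?",
    "Do you like to badmouth other people?",
    "Do you chew with your mouth open?",
]  # ...the remaining twelve questions would follow the same pattern

def score(answers):
    """Count how many questions were answered 's' (yes)."""
    return sum(1 for a in answers if a == "s")

def verdict(points):
    # Same thresholds as the notebook: 0, 1-5, 6-10, 11+.
    if points == 0:
        return "You are the nicest person in the world!"
    elif points <= 5:
        return "You have your annoying moments"
    elif points <= 10:
        return "Maybe you are a bit annoying"
    return "Nobody can stand you"

print(verdict(score(["s", "n", "s"])))  # You have your annoying moments
```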
docs/contents/Extract.ipynb
###Markdown Extracting a molecular subsystem How to extract a molecular subsystem ###Code import molsysmt as msm molecular_system = msm.convert(msm.demo_systems.files['1sux.mmtf']) msm.info(molecular_system, target='entity') small_molecule = msm.extract(molecular_system, selection='molecule_type=="small_molecule"') msm.info(small_molecule) ###Output _____no_output_____
word-counter.ipynb
###Markdown Word counter using PySpark ###Code from pyspark.sql import SparkSession import pyspark.sql.functions as F spark = SparkSession.builder.getOrCreate() df = spark.read.text("./book-asset.txt") df = df.filter(F.col("value") != "") # Remove empty rows df.head(5) word_counts = ( df.withColumn("word", F.explode(F.split(F.col("value"), "\s+"))) .withColumn("word", F.regexp_replace("word", "[^\w]", "")) .groupBy("word") .count() .sort("count", ascending=False) ) word_counts.head(5) # Top 10 word_counts.show(10) # All words count word_counts.agg(F.sum("count").alias("count_all_words")).show() # Whale count word_counts.filter(F.col("word").rlike("(?i)whale")).agg( F.sum("count").alias("whale_count") ).show() # Unique count print("Unique words: ", word_counts.count()) ###Output [Stage 33:==============================================> (174 + 4) / 200]
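The same pipeline — split on whitespace, strip non-word characters, group and count — can be sketched without Spark to see what each stage is doing. A toy plain-Python equivalent (not part of the Spark job; it drops tokens that become empty after stripping):

```python
import re
from collections import Counter

def word_counts(text):
    """Split on whitespace, strip non-word characters, count occurrences."""
    words = (re.sub(r"[^\w]", "", w) for w in re.split(r"\s+", text))
    return Counter(w for w in words if w)

counts = word_counts("the whale; the White Whale!")
print(counts.most_common(2))
```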
PROJECT/0006-Fit-ML-R.ipynb
###Markdown Practical Machine Learning Course Project Yanal Kashou Introduction 1. Sources for this project available here: The source for the training data is: https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv The source for the test data is: https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv 2. Credit Many thanks to the authors for providing this dataset for free use. http://groupware.les.inf.puc-rio.br/har Velloso, E.; Bulling, A.; Gellersen, H.; Ugulino, W.; Fuks, H. __Qualitative Activity Recognition of Weight Lifting Exercises__. Proceedings of 4th International Conference in Cooperation with SIGCHI (Augmented Human '13). Stuttgart, Germany: ACM SIGCHI, 2013. Synopsis In this project we are going to attempt to predict the testing dataset provided through the link above, using the training dataset also provided. Our first step will be to load the necessary libraries for our analysis. Our second step will be to read the data and partition the training dataset into 60% training and 40% validation. Next we need to clean it of any NA values or anomalies, and reduce it to a workable size. Our third step is to implement various machine learning algorithms, namely Decision Tree, Random Forest and Support Vector Machine, to predict the testing dataset. Our fourth step is to assess the performance and accuracy of these methods. Our fifth and final step is to use the algorithm with the highest accuracy to effectively and accurately predict the test dataset provided. ###Code
library(caret) # For training datasets and applying machine learning algorithms
library(ggplot2) # For awesome plotting
library(rpart)
library(rpart.plot)
library(rattle)
library(randomForest)
library(e1071)
library(dplyr)
set.seed(111) ###Output _____no_output_____ ###Markdown Data Loading, Cleaning and Partitioning 1.
Loading and Reading ###Code
# We will use url0 for the training dataset and url1 for the testing dataset
url0 <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv"
url1 <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv"
# Download and read datasets
train <- read.csv(url(url0))
test <- read.csv(url(url1)) ###Output _____no_output_____ ###Markdown 2. Partitioning ###Code
# Partition the training data into 60% training and 40% validation and check dimensions.
trainIndex <- createDataPartition(train$classe, p = .60, list = FALSE)
trainData <- train[ trainIndex, ]
validData <- train[-trainIndex, ]
dim(trainData)
dim(validData) ###Output _____no_output_____ ###Markdown 3. Cleaning ###Code
# Both return 160 variables, many of which are filled with NA records
# If more than 10% of a variable's records are NA, the variable is omitted from both datasets
trainData <- trainData[, colMeans(is.na(trainData)) < 0.1]
validData <- validData[, colMeans(is.na(validData)) < 0.1]
dim(trainData)
dim(validData)
# We can remove the first five columns, which are ID columns, as well as the timestamp, as we do not need it in this analysis.
trainData <- trainData[, -(1:5)]
validData <- validData[, -(1:5)]
dim(trainData)
dim(validData)
# We can also remove all variables with nearly zero variance
near.zero.var <- nearZeroVar(trainData)
trainData <- trainData[, -near.zero.var]
validData <- validData[, -near.zero.var]
dim(trainData)
dim(validData) ###Output _____no_output_____ ###Markdown We have now managed to reduce the number of variables from 160 to 54 and since both the `validData` and `trainData` have an equal number of variables, we can implement our prediction algorithms in an easier fashion. Prediction Algorithms We will be exploring three distinct machine learning algorithms: * Decision Tree (rpart) * Random Forest (randomForest) * Support Vector Machine (svm) 1.
Decision Tree (rpart) ###Code mod.train.dt <- train(classe ~ ., method = "rpart", data = trainData) mod.predict.dt <- predict(mod.train.dt, validData) cm.dt <- confusionMatrix(mod.predict.dt, validData$classe) print(mod.train.dt$finalModel) fancyRpartPlot(mod.train.dt$finalModel,cex=.5,under.cex=1,shadow.offset=0) ###Output _____no_output_____ ###Markdown 2. Random Forest Using `randomForest` package with 10-Fold Cross-Validation ###Code mod.train.rf <- randomForest(classe ~ ., data = trainData, mtry = 3, ntree = 200, do.trace = 25, cv.fold = 10) mod.predict.rf <- predict(mod.train.rf, validData) cm.rf <- confusionMatrix(mod.predict.rf, validData$classe) # Variable Importance According to Random Forest imp.rf <- importance(mod.train.rf) imp.rf.arranged <- arrange(as.data.frame(imp.rf), desc(MeanDecreaseGini)) head(imp.rf.arranged, 15) varImpPlot(mod.train.rf, n.var = 15, sort = TRUE, main = "Variable Importance", lcolor = "blue", bg = "purple") ###Output _____no_output_____ ###Markdown 3. Support Vector Machine ###Code mod.train.svm <- svm(classe ~ ., data = trainData) mod.predict.svm <- predict(mod.train.svm, validData) cm.svm <- confusionMatrix(mod.predict.svm, validData$classe) ###Output _____no_output_____ ###Markdown Compare Accuracies ###Code a.dt <- cm.dt$overall[1] a.rf <- cm.rf$overall[1] a.svm <- cm.svm$overall[1] cm.dataframe <- data.frame(Algorithm = c("Decision Tree", "Random Forest", "Support Vector Machine"), Index = c("dt", "rf", "svm"), Accuracy = c(a.dt, a.rf, a.svm)) cm.dataframe <- arrange(cm.dataframe, desc(Accuracy)) cm.dataframe ###Output _____no_output_____ ###Markdown We can clearly see that Random Forest has the highest accuracy at ~ 99.4%, followed by Support Vector Machine at ~ 94.6%. Decision Tree gave us the lowest accuracy at ~ 47.7%. 
Errors In Sample Error ###Code
# In sample Error Rate
InSampError.rf <- (1 - 0.994)*100
InSampError.rf ###Output _____no_output_____ ###Markdown We can see that the In Sample error is 0.6% Out of Sample Error ###Code
# Out of sample Error Rate
print(mod.train.rf) ###Output Call: randomForest(formula = classe ~ ., data = trainData, mtry = 3, ntree = 200, do.trace = 25, cv.fold = 10) Type of random forest: classification Number of trees: 200 No. of variables tried at each split: 3 OOB estimate of error rate: 0.46% Confusion matrix: A B C D E class.error A 3348 0 0 0 0 0.000000000 B 7 2269 3 0 0 0.004387889 C 0 9 2044 1 0 0.004868549 D 0 0 27 1902 1 0.014507772 E 0 0 1 5 2159 0.002771363 ###Markdown We can see that the OOB (Out of Bag) or Out of Sample Error of Random Forest with 10-Fold Cross Validation is 0.46%, which is consistent with the confusion matrix. However, it is worth noting that Random Forest OOB estimation does not require Cross Validation to decrease bias. The In Sample Error is actually higher than the OOB, which is unusual. It might be due to variance in the estimation of the error rates or due to overfitting. Nonetheless our prediction in the next section proves our model highly accurate. Final Prediction Using Random Forest Prediction Results of Algorithm with Highest Accuracy (Random Forest) ###Code
fp.rf <- predict(mod.train.rf, newdata=test)
fp.rf ###Output _____no_output_____
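As a sanity check, the OOB estimate can be recomputed by hand from the printed confusion matrix — it is just the off-diagonal count divided by the total (plain Python, with the numbers copied from the output above):

```python
# Rows = true class A..E, columns = predicted class, copied from print(mod.train.rf).
conf = [
    [3348, 0, 0, 0, 0],
    [7, 2269, 3, 0, 0],
    [0, 9, 2044, 1, 0],
    [0, 0, 27, 1902, 1],
    [0, 0, 1, 5, 2159],
]
total = sum(sum(row) for row in conf)
misclassified = total - sum(conf[i][i] for i in range(len(conf)))
oob_error = 100 * misclassified / total
print(round(oob_error, 2))  # 0.46 — matches the reported OOB estimate
```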
juvini_documentation.ipynb
###Markdown JUVINI A comprehensive graphing tool for EDA — like a profiler, using graphs. To install the package: `pip install juvini`
- **[Introduction](introduction)**
- **[Requirement](requirement)**
- **[Usage](usage)**
- **[Best Practices](best-practices)**
Introduction Plotting graphs is one of the most important aspects of EDA. Graphs give intuitive insights because they are processed by our natural neural networks, trained and evolved non-stop for years. This tool is designed to allow data science users to work on plotting the graphs rather than spending time on code to do it. This tool has several levels. The highest level is where the user just has to input the entire data frame and the code will take care of giving all plots based on the data type for all combinations — just like the way pairplot works for numerical datatypes. Requirement 1. User should have some idea of Python. This can be run from Jupyter as well as the Python console 2. Should have a good understanding of different graph types, especially boxplot, scatterplot, barplot, countplot and distplot 3. This is not a must, but if the user has a clear understanding of the datatype associated with each column, then converting to that datatype will make the graph look better. For eg, if a column contains the categorical values 1,2,3,4, then it is better to convert it to object or category so that the tool will be able to guess it. Else it will assume the datatype is numeric and will plot numeric-related graphs. Note that there is a feature within juvini that will automatically consider a numeric column as a category if the unique values in it are fewer than 5. 4. The tool will always treat the first column as the X axis, the second input column as the Y axis, and if the parameter `hue_col` is specified then it will search for this column in the rest of the dataframe. Usage Consider the standard IRIS dataset. Here we modified it a bit to add a numeric column rating where the values are 0, 1, 2, 3.
Even though it is categorical, we have purposely kept it as a numerical column to show some use cases that will come in later sections. It consists of 6 columns:
1. sepal_length - numeric
2. sepal_width - numeric
3. petal_length - numeric
4. petal_width - numeric
5. species - categorical
6. rating - numeric (in fact it is categorical in a real scenario)

Sample output:
sepal_length,sepal_width,petal_length,petal_width,species,rating
5.1,3.5,1.4,0.2,setosa,1
4.9,3.0,1.4,0.2,setosa,1
4.7,3.2,1.3,0.2,setosa,0
4.6,3.1,1.5,0.2,setosa,3
5.0,3.6,1.4,0.2,setosa,0
5.4,3.9,1.7,0.4,setosa,1
4.6,3.4,1.4,0.3,setosa,3
5.0,3.4,1.5,0.2,setosa,0
4.4,2.9,1.4,0.2,setosa,1
4.9,3.1,1.5,0.1,setosa,1
5.4,3.7,1.5,0.2,setosa,0

NUMERIC vs NUMERIC - to plot graph where two columns are numeric. Method : `num_num(df[[num_col1,num_col2]])` Examples Simple numeric to numeric plotting ###Code
import pandas as pd
from juvini import num_num
df=pd.read_csv('iris_with_rating.csv')
num_num(df[['sepal_length','sepal_width']]) ###Output _____no_output_____ ###Markdown What if I want to add a hue parameter to it? Just make sure to add the additional column `species` to the input dataframe and also add the parameter `hue_col='species'` ###Code
num_num(df[['sepal_length','sepal_width','species']],hue_col='species') ###Output _____no_output_____
Method : `cat_cat(df[[cat_col1,cat_col2]])` Examples This will take the top 5 categories for each column and plot it. You can change this value 5 using parameters `xcap` and `ycap` as mentioned below.For each value of X , it will give the countplot for values in Y. Also the tool will take care of all subplots and figure size etc. User do not have to figure out the sizing and subplot grid size. ###Code import pandas as pd from juvini import cat_cat df=pd.read_csv('iris_with_rating.csv') cat_cat(df[['species','rating']]) ###Output _____no_output_____ ###Markdown similarly interchanging first and second column will change the axis`cat_cat(df[['rating','species']])` ###Code cat_cat(df[['rating','species']]) ###Output _____no_output_____ ###Markdown But wait , did we just use a numerical column to plot a categorical column?Actually yes , if we know that it is categorical , we do not have to change the datatype and all unnecessary things. the code will take care of converting it to category.Fine , but what if there are too many categories and i simply need to have a gist of top few categories?Yes that is also supported , simply provide the parameter `xcap=` , the code will sort the categories based on its count and choose the top n values based on the input. additional parameters 1. x_name='xvalue' , the name that you want in x axis for the first column , sometimes the column name are different from the name you want to see in the graph.By default the first column name is taken2. y_name='yvalue' , same as x_name , but for Y axis3. size_figure=(13,4) , for playing around with the size. depending on size of the screen you may want to change it. default is 13,4 with tight layout4. xcap=5 , will cap the maximum categories with top 5 based on its count for x axis 1st column , default 55. ycap=5 , same as xcap , but will be applicable to y column.6. hue_cols , to plot the hue. See the above example7. scols=3 , this is an experimental feature , use with caution. 
This parameter will control how many plots in one row. By default it is 37. others=True , this is an experimental feature , use with caution. This parameter will put all the other values that are not coming in the top values provided into a category called 'restall' CATEGORICAL vs NUMERICAL - to plot graph where two columns where x is category and y is numeric. Method : `cat_num(df[[cat_col1,num_col2]])` Examples This will take the top 5 categories of categorical column and plot numerical. You can change this value 5 using parameters `xcap` and `ycap` as mentioned below.For each value of X , it will give the boxplot corresponding to the numerical column in that. Additionally it will also give aggregate sum of the numerical values for each category.It is upto the user to decide which is useful. Boxplot is always useful , whereas the sum aggregate might help if you are looking at something like total votes etc. but if it is like sepal_width kind , then it may not be useful.Anyways no harm in giving both. ###Code import pandas as pd from juvini import cat_num df=pd.read_csv('iris_with_rating.csv') cat_num(df[['species','petal_length']]) ###Output _____no_output_____ ###Markdown Can we use a numerical column to plot a categorical column?Actually yes , if we know that it is categorical , we do not have to change the datatype and all unnecessary things. the code will take care of converting it to category as long as you provide the column as first column in the inputFine , but what if there are too many categories and i simply need to have a gist of top few categories?Yes that is also supported , simply provide the parameter `xcap=` , the code will sort the categories based on its count and choose the top n values based on the input.How about the hue?Yes , that also will work here. provide it like ###Code cat_num(df[['species','petal_length','rating']],hue_col='rating') ###Output _____no_output_____ ###Markdown additional parameters 1. 
x_name='xvalue' , the name that you want in x axis for the first column , sometimes the column name are different from the name you want to see in the graph.By default the first column name is taken2. y_name='yvalue' , same as x_name , but for Y axis3. size_figure=(13,4) , for playing around with the size. depending on size of the screen you may want to change it. default is 13,4 with tight layout4. xcap=5 , will cap the maximum categories with top 5 based on its count for x axis 1st column , default 56. hue_cols , to plot the hue. See the above example7. others=True , this is an experimental feature , use with caution. This parameter will put all the other values that are not coming in the top values provided into a category called 'restall'. There are ratings 0-3. If we cap it to only top 2. Then the rest of the ratings will go into "restall" value. ###Code cat_num(df[['rating','petal_length']],xcap=2,others=True) ###Output _____no_output_____ ###Markdown Single NUMERICAL - to plot graph with just a numerical column Method : `single_num(df[[num_col1]])` Examples It is not always the case that plot will need two columns. What if i just need to see a boxplot of a numeric column or the distribution of a numeric column?For that we have the method which will give boxplot and distplot. It is usually used with the hue to give more insights ###Code import pandas as pd from juvini import single_num df=pd.read_csv('iris_with_rating.csv') single_num(df[['sepal_length']]) ###Output _____no_output_____ ###Markdown How about the hue?Yes , that also will work here. provide it like ###Code single_num(df[['sepal_length','species']],hue_col='species') ###Output _____no_output_____ ###Markdown additional parameters 1. x_name='xvalue' , the name that you want in x axis for the first column , sometimes the column name are different from the name you want to see in the graph.By default the first column name is taken2. size_figure=(13,4) , for playing around with the size. 
depending on size of the screen you may want to change it. default is 13,4 with tight layout3. hue_cols , to plot the hue. See the above example Single CATEGORICAL - to plot graph with just a categorical column Method : `single_cat(df[[cat_col1]])` Examples It is not always the case that plot will need two columns. What if i just need to see a boxplot of a categorical column or the distribution of a numeric column?For that we have the method which will give boxplot and distplot. It is usually used with the hue to give more insights ###Code import pandas as pd from juvini import single_cat df=pd.read_csv('iris_with_rating.csv') single_cat(df[['species']]) ###Output _____no_output_____ ###Markdown Can we use a numerical column to plot a categorical column?Actually yes , if we know that it is categorical , we do not have to change the datatype and all unnecessary things. the code will take care of converting it to category as long as you provide the column as first column in the input Fine , but what if there are too many categories and i simply need to have a gist of top few categories?Yes that is also supported , simply provide the parameter `xcap=` , the code will sort the categories based on its count and choose the top n values based on the input. ###Code single_cat(df[['species']],xcap=2) ###Output _____no_output_____ ###Markdown Fine , what if i want to change not the xcap but the ycap?Yes we can do that as well. Simply change the parameter `ycap=` just like the xcap.How about the hue?Yes , that also will work here. provide it like `single_cat(df[['species','hue_column']],hue_col='hue_column)` ###Code single_cat(df[['species','rating']],hue_col='rating') ###Output _____no_output_____ ###Markdown additional parameters 1. x_name='xvalue' , the name that you want in x axis for the first column , sometimes the column name are different from the name you want to see in the graph.By default the first column name is taken2. 
size_figure=(13,4) , for playing around with the size. depending on size of the screen you may want to change it. default is 13,4 with tight layout3. hue_cols , to plot the hue. See the above example4. xcap=5 , will cap the maximum categories with top 5 based on its count for x axis 1st column , default 5 To make it more easier Method : `xy_autoplot(df[[col1,col2]])` ExamplesWhat if i do not even care what the data type is. I just want the code to decide it based on the data type already present.Can i do that?Yes. There is a method which does exactly this. You will have to simply give two columns. The first column will be taken as X variable and second as Y variable. And based on the data type it will provide you the necessary graph. ###Code import pandas as pd from juvini import xy_auto_plot df=pd.read_csv('iris_with_rating.csv') xy_auto_plot(df[['sepal_length','species']]) ###Output _____no_output_____ ###Markdown Does it support hue?Yes , you can use the same parameter `hue_col=` and if the graph can handle hue , then it will use it. ###Code xy_auto_plot(df[['sepal_length','species']],hue='rating') cat_num(df[['rating','sepal_length']]) ###Output _____no_output_____ ###Markdown Still better and most comfortable Method : `juvini_profile(df[[list_of_cols]])` ExamplesThis is the highest of all that combines all below features and give the entire story in a matter of one command. 
###Code
import pandas as pd
from juvini import juvini_profile
df=pd.read_csv('iris_with_rating.csv')
juvini_profile(df,hue_col='species') ###Output Numerical columns: ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'rating'] Categorical columns: [] Analysis of numeric sepal_length and numeric sepal_length ###Markdown An easier way to get only the graphs related to the dependent variable In many cases we may not need all sorts of graphs, but are rather interested in seeing the graphs related to the target variable; to do so, use the feature `juvini_against_target(df[col_list],target_col=)` ###Code
import pandas as pd
from juvini import juvini_against_target
df=pd.read_csv('iris_with_rating.csv')
juvini_against_target(df,target_col='species') ###Output _____no_output_____
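The `xcap`/`others` behaviour described in the parameter notes — keep the top-n categories by count and lump the rest into 'restall' — can be sketched in a few lines. This is an illustration of the described logic only, not juvini's actual implementation:

```python
from collections import Counter

def cap_categories(values, xcap=5, others=False):
    """Keep the xcap most frequent categories; optionally bucket the rest as 'restall'."""
    top = {cat for cat, _ in Counter(values).most_common(xcap)}
    if others:
        return [v if v in top else "restall" for v in values]
    return [v for v in values if v in top]

ratings = [0, 1, 1, 1, 2, 3, 3]
print(cap_categories(ratings, xcap=2))               # [1, 1, 1, 3, 3]
print(cap_categories(ratings, xcap=2, others=True))  # ['restall', 1, 1, 1, 'restall', 3, 3]
```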
notebooks/day-6/causal/example2.ipynb
###Markdown (source: https://causalinference.gitlab.io/dowhy/do_why_estimation_methods.html) ###Code
#!pip install git+https://github.com/microsoft/dowhy.git ###Output _____no_output_____ ###Markdown DoWhy: Different estimation methods for causal inference This is a quick introduction to the DoWhy causal inference library. We will load in a sample dataset and use different methods for estimating the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable. ###Code
import numpy as np
import pandas as pd
import logging

import dowhy
from dowhy.do_why import CausalModel
import dowhy.datasets ###Output _____no_output_____ ###Markdown Let us first load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. ###Code
data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_samples=10000, treatment_is_binary=True)
df = data["df"]
data ###Output _____no_output_____ ###Markdown Note that we are using a pandas dataframe to load the data. Identifying the causal estimand We now input a causal graph in the GML graph format. ###Code
# With graph
model=CausalModel(
    data = df,
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
    instruments=data["instrument_names"],
    logging_level = logging.INFO
)
model.view_model() ###Output _____no_output_____ ###Markdown We get a causal graph. Now identification and estimation can be done. ###Code
identified_estimand = model.identify_effect()
print(identified_estimand) ###Output _____no_output_____ ###Markdown Method 1: Regression Use linear regression.
###Code causal_estimate_reg = model.estimate_effect(identified_estimand, method_name="backdoor.linear_regression", test_significance=True) print(causal_estimate_reg) print("Causal Estimate is " + str(causal_estimate_reg.value)) ###Output _____no_output_____ ###Markdown Method 2: StratificationWe will be using propensity scores to stratify units in the data. ###Code causal_estimate_strat = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate_strat) print("Causal Estimate is " + str(causal_estimate_strat.value)) ###Output _____no_output_____ ###Markdown Method 3: MatchingWe will be using propensity scores to match units in the data. ###Code causal_estimate_match = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_matching") print(causal_estimate_match) print("Causal Estimate is " + str(causal_estimate_match.value)) ###Output _____no_output_____ ###Markdown Method 4: WeightingWe will be using (inverse) propensity scores to assign weights to units in the data. ###Code causal_estimate_ipw = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_weighting") print(causal_estimate_ipw) print("Causal Estimate is " + str(causal_estimate_ipw.value)) ###Output _____no_output_____
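The weighting estimator in Method 4 is essentially one line: weight each unit's outcome by the inverse of its propensity score. A toy sketch of the idea with hand-picked propensities (not dowhy's implementation — just the estimator on made-up numbers):

```python
# (treated, outcome, propensity) triples; propensities are invented for illustration.
units = [(1, 10.0, 0.5), (0, 2.0, 0.5), (1, 12.0, 0.5), (0, 4.0, 0.5)]

def ipw_ate(data):
    """Inverse-propensity-weighted estimate of the average treatment effect."""
    terms = [t * y / e - (1 - t) * y / (1 - e) for t, y, e in data]
    return sum(terms) / len(terms)

print(ipw_ate(units))  # 8.0
```

With a constant propensity of 0.5 this reduces to the simple difference of treated and control means, which makes the arithmetic easy to verify by hand.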
python notebooks/Cancer_Prediction.ipynb
###Markdown Applying Hyperparameter Tuning ###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import RandomizedSearchCV

classifier = RandomForestClassifier(n_jobs = -1)

from scipy.stats import randint
param_dist={'max_depth':[3,5,10,None],
            'n_estimators':[10,100,200,300,400,500],
            'max_features':randint(1,27),
            'criterion':['gini','entropy'],
            'bootstrap':[True,False],
            'min_samples_leaf':randint(1,27),
           }

search_clfr = RandomizedSearchCV(classifier, param_distributions = param_dist, n_jobs=-1, n_iter = 40, cv = 9)
search_clfr.fit(X_train, y_train)

params = search_clfr.best_params_
score = search_clfr.best_score_
print(params)
print(score)

classifier = RandomForestClassifier(n_jobs=-1, n_estimators=200, bootstrap=True, criterion='gini', max_depth=20, max_features=8, min_samples_leaf=1)
classifier.fit(X_train, y_train)

confusion_matrix(y_test, classifier.predict(X_test))

print(f"Accuracy is {round(accuracy_score(y_test, classifier.predict(X_test))*100,2)}%")

import pickle
pickle.dump(classifier, open('cancer.pkl', 'wb')) ###Output _____no_output_____
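RandomizedSearchCV's core loop is simple: sample `n_iter` parameter combinations from the distributions, score each one, keep the best. A plain-Python sketch of that loop, with a toy scoring function standing in for the cross-validated model fit (the score function here is invented purely for illustration):

```python
import random

param_dist = {
    "max_depth": [3, 5, 10, None],
    "n_estimators": [10, 100, 200, 300, 400, 500],
    "criterion": ["gini", "entropy"],
}

def toy_score(params):
    # Stand-in for a cross-validated accuracy; a real search would fit a model here.
    return (params["max_depth"] or 20) * 0.01 + params["n_estimators"] * 0.0001

def random_search(dist, score_fn, n_iter=40, seed=0):
    """Sample n_iter parameter combinations and return the best (score, params) pair."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_iter):
        params = {name: rng.choice(choices) for name, choices in dist.items()}
        trials.append((score_fn(params), params))
    return max(trials, key=lambda t: t[0])

best_score, best_params = random_search(param_dist, toy_score)
print(best_score, best_params)
```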
keras.ipynb
###Markdown ReLU ###Code test_activation("relu") ###Output Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 5 hidden layers: 0.9851 +- 0.3370 Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 10 hidden layers: 1.0463 +- 0.3600 Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 15 hidden layers: 1.1855 +- 0.5735 Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 20 hidden layers: 10.3589 +- 17.3650 ###Markdown ELU ###Code test_activation("elu") ###Output Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 5 hidden layers: 1.5566 +- 0.8634 Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 10 hidden layers: 0.9856 +- 0.3927 Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 15 hidden layers: 1.2344 +- 0.7970 Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 20 hidden layers: 0.8102 +- 0.1938 ###Markdown SELU ###Code test_activation("selu") ###Output Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 5 hidden layers: 1.4322 +- 0.4293 Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 10 hidden layers: 1.4721 +- 0.2673 Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 15 hidden layers: 1.7472 +- 0.3362 Test 1 ... Test 2 ... Test 3 ... Test 4 ... Test 5 ... --> 20 hidden layers: 3.3672 +- 0.6534 ###Markdown Writing your first Neural Net [based on this blog](https://towardsdatascience.com/writing-your-first-neural-net-in-less-than-30-lines-of-code-with-keras-18e160a35502) [Recognizing numbers](https://en.wikipedia.org/wiki/MNIST_database) is the hello world of image recognition machine learning. This example uses Keras. [Keras](https://keras.io/) is a high-level neural networks API. Don't be intimidated by all the terms and definitions. There are a lot of links to Wikipedia articles to start reading on several topics, but the video really contains all you need to know.
![image.png](attachment:image.png) Prereqs Assuming you have set up Anaconda and you are able to run Jupyter notebooks, all you need to do is make sure Keras with the TensorFlow backend is installed: `pip3 install Keras` `pip3 install Tensorflow` What is a neural network? Some buzzwords: A neural network passes information contained within a **Vector or Scalar** through **layers**, where the output of one layer acts as the input into the next. While traveling through these layers the input is modified by **weight** and **bias** and sent to the [**activation function**](https://en.wikipedia.org/wiki/Activation_function) to map the output. The learning then occurs via a [**Cost (or loss) function**](https://en.wikipedia.org/wiki/Loss_function) that compares the actual output and the desired output, which in turn alters and adjusts the weights and biases to minimize the cost via a process called [**backpropagation**](https://en.wikipedia.org/wiki/Backpropagation). [**Gradient descent**](https://en.wikipedia.org/wiki/Gradient_descent) optimizes the model by following the gradient of the loss function: backpropagation computes the gradients, gradient descent uses them for training. Better: see this video from the YouTube channel [3blue1brown](https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw), or watch it embedded in the notebook below ###Code
# or watch the embedded video in the notebook
from IPython.display import IFrame
IFrame("https://www.youtube.com/embed/aircAruvnKk",663,382) ###Output _____no_output_____ ###Markdown Let's start We start with importing the Keras Python modules.
###Code
from keras.datasets import mnist
from keras import models
from keras import layers
from keras.utils import to_categorical ###Output _____no_output_____ ###Markdown We load the image database and split the dataset into train and test sets. The images are available as a [dataset](https://keras.io/datasets/mnist-database-of-handwritten-digits) in the Keras datasets. ###Code
(train_images, train_labels), (test_images, test_labels) = mnist.load_data() ###Output _____no_output_____ ###Markdown Build our model * initialize a sequential model called network; * add the neural network layers. A dense layer means that each neuron receives input from all the neurons in the previous layer. 784 (28 * 28) and 10 are the dimensions of the layer outputs: since we have to predict a digit we end with 10, and we start with the number of pixels in the image. The input_shape is the shape of the picture, in our case 28 * 28 pixels, and the activation is [relu](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)), the activation function we use for calculating the output. The last layer uses [softmax](https://en.wikipedia.org/wiki/Softmax_function) as its activation function. ###Code
network = models.Sequential()
network.add(layers.Dense(784, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(784, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax')) ###Output _____no_output_____ ###Markdown Compile the network We now configure the learning process. The network needs to know these three things: * the optimizer algorithm, [adam](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam) * the loss function, [categorical crossentropy](https://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/loss-functions/categorical-crossentropy), which is useful for single-category classification (note that this is why we need the softmax in the last layer.)
* the metric used to judge the performance of the model ###Code network.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Reshaping the data All the input data must be shaped into the format the model uses. We reshape our data and split it between 60,000 train images (28 * 28 pixels) and 10,000 test images (28 * 28 pixels). _(Note that the shape of our input images was already (28 * 28); in more complex cases you may need to reshape anyway.)_ ###Code train_images = train_images.reshape((60000, 28 * 28)) train_images = train_images.astype('float32') / 255 test_images = test_images.reshape((10000, 28 * 28)) test_images = test_images.astype('float32') / 255 ###Output _____no_output_____ ###Markdown We will also need to encode the data. We use [categorical encoding](https://keras.io/utils/to_categorical). This is needed for use with the categorical_crossentropy loss function. ###Code train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) ###Output _____no_output_____ ###Markdown Run it! To run it, call the fit function and pass in the required parameters. The values are chosen to balance performance and accuracy. * Batch: a set of 128 images that are processed independently, in parallel. * Epoch: one pass over the entire dataset, used to separate training into distinct phases, which is useful for logging and periodic evaluation. ###Code network.fit(train_images, train_labels, epochs=5, batch_size=128) ###Output _____no_output_____ ###Markdown Deep Learning Deep learning is a subset of machine learning. In machine learning, features are given manually; deep learning, on the other hand, learns features directly from the data. - Parameters are weights and biases. - Weights: coefficients of each pixel - Bias: intercept - z = (w.T)x + b => z equals (transpose of weights times input x) plus bias - In other words => z = b + px1*w1 + px2*w2 + ...
+ px4096*w4096 - y_head = sigmoid(z) - The sigmoid function maps z to between zero and one, so the result can be read as a probability. You can see the sigmoid function in the computation graph. - A Lambda layer performs simple arithmetic operations like sum, average, exponentiation, etc.

```python
from keras import backend as K
from keras.layers import Lambda

def standardize(x):
    # standardization subtracts the mean and divides by the standard deviation
    return (x - K.mean(x)) / K.std(x)

model.add(Lambda(standardize, input_shape=(28, 28, 1)))
```

__Why do we use the sigmoid function?__ It gives a probabilistic result, and it is differentiable, so we can use it in the gradient descent algorithm (as we will see soon). Let's say we find z = 4 and put z into the sigmoid function. The result (y_head) is about 0.98. It means that our classification result is 1 with roughly 98% probability. Adam is one of the most effective optimization algorithms for training neural networks. Among its advantages are relatively low memory requirements, and it usually works well even with little tuning of hyperparameters. Keras - models - layers - callbacks - optimizers - metrics - losses - utils - constraints - data preprocessing __models__ The core data structures of Keras are layers and models. The simplest type of model is the Sequential model, a linear stack of layers. For more complex architectures, you should use the Keras functional API, which lets you build arbitrary graphs of layers, or write models entirely from scratch via subclassing. __How to restrict weights to a range in Keras__

```python
from keras.constraints import max_norm
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32),
                 padding='same', activation='relu',
                 kernel_constraint=max_norm(3)))
```

Constraining the weight matrix directly is another kind of regularization. If you use a simple L2 regularization term, you penalize high weights through your loss function. With this constraint, you regularize directly. As also noted in the Keras docs, this seems to work especially well in combination with a dropout layer.
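The max-norm constraint above rescales a weight vector whenever its L2 norm exceeds the chosen limit c. A minimal plain-Python sketch of the idea (illustrative only; Keras's actual constraint operates per-axis on weight tensors):

```python
import math

def max_norm(w, c=3.0):
    # if the vector's L2 norm exceeds c, scale it back onto the norm-c sphere
    norm = math.sqrt(sum(v * v for v in w))
    if norm > c:
        return [v * c / norm for v in w]
    return w

print(max_norm([3.0, 4.0]))  # norm 5 > 3, so the vector is rescaled to norm 3
```

Applied after each gradient update, this keeps any single unit's incoming weights from growing without bound, which is why it combines well with dropout.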
Bidirectional LSTM RNN architectures like LSTM and BiLSTM are used when the learning problem is sequential. LSTMs and their bidirectional variants are popular because they try to learn how and when to forget, and when not to, using gates in their architecture. In previous RNN architectures, vanishing gradients were a big problem and caused those nets not to learn much. Using bidirectional LSTMs, you feed the learning algorithm with the original data once from beginning to end and once from end to beginning. The term bidirectional means that you run your input in two directions (from past to future and from future to past). A unidirectional LSTM only preserves information from the past, because the only inputs it has seen are from the past. A bidirectional LSTM runs your inputs in two ways, one from past to future and one from future to past, and what distinguishes this approach from the unidirectional one is that in the LSTM that runs backwards you preserve information from the future; using the two hidden states combined, you are able at any point in time to preserve information from both past and future. GRU vs LSTM The key difference between a GRU and an LSTM is that a GRU has two gates (reset and update gates) whereas an LSTM has three gates (namely input, output and forget gates). The GRU controls the flow of information like the LSTM unit, but without having to use a memory unit. It just exposes the full hidden content without any control. __Sample:__ one element of a dataset. For instance, one image is a sample in a convolutional network. One audio snippet is a sample for a speech recognition model. __Batch:__ a set of N samples. The samples in a batch are processed independently, in parallel. If training, a batch results in only one update to the model. A batch generally approximates the distribution of the input data better than a single input.
The larger the batch, the better the approximation; however, it is also true that the batch will take longer to process and will still result in only one update. For inference (evaluate/predict), it is recommended to pick a batch size that is as large as you can afford without going out of memory (since larger batches will usually result in faster evaluation/prediction). __Epoch:__ an arbitrary cutoff, generally defined as "one pass over the entire dataset", used to separate training into distinct phases, which is useful for logging and periodic evaluation. When using validation_data or validation_split with the fit method of Keras models, evaluation will be run at the end of every epoch. Within Keras, there is the ability to add callbacks specifically designed to be run at the end of an epoch. Examples of these are learning rate changes and model checkpointing (saving). __EarlyStopping callback:__ interrupts training when the validation loss isn't decreasing anymore. __ModelCheckpoint callback:__ to ensure the ability to recover from an interrupted training run at any time (fault tolerance), you should use a callback that regularly saves your model to disk. You should also set up your code to optionally reload that model at startup.
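The sample/batch/epoch definitions above imply some simple arithmetic. For the IMDB example below (assuming the standard 25,000-sample training split and the batch_size of 128 used there), one epoch consists of ceil(25000 / 128) weight updates, and the last batch is smaller than the rest:

```python
import math

n_samples, batch_size = 25000, 128  # assuming the standard IMDB training split
steps_per_epoch = math.ceil(n_samples / batch_size)
last_batch = n_samples - (steps_per_epoch - 1) * batch_size

print(steps_per_epoch)  # 196 weight updates per epoch
print(last_batch)       # the final batch holds only 40 samples
```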
###Code import numpy as np from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional from keras.datasets import imdb n_unique_words = 10000 # restrict the vocabulary to the 10,000 most frequent words maxlen = 200 # cut texts after this number of words batch_size = 128 (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=n_unique_words) x_train = sequence.pad_sequences(x_train, maxlen=maxlen) x_test = sequence.pad_sequences(x_test, maxlen=maxlen) y_train = np.array(y_train) y_test = np.array(y_test) model = Sequential() model.add(Embedding(n_unique_words, 128, input_length=maxlen)) model.add(Bidirectional(LSTM(64))) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) print('Train...') model.fit(x_train, y_train, batch_size=batch_size, epochs=4, validation_data=(x_test, y_test)) ###Output Using TensorFlow backend. ###Markdown ###Code # Import the Keras package import tensorflow.keras import numpy # Define the data x = numpy.array([0, 1, 2, 3, 4]) y = x * 2 + 1 # Model the neural network model = tensorflow.keras.models.Sequential() model.add(tensorflow.keras.layers.Dense(1, input_shape=(1,))) model.compile('SGD', 'mse') # Train the model on the given data model.fit(x[:2], y[:2], epochs=1000, verbose=0) # Evaluate performance print('Targets:', y[2:]) print('Predictions:', model.predict(x[2:]).flatten()) ###Output Targets: [5 7 9] Predictions: [4.9513407 6.9161305 8.88092 ] ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
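The single-neuron model above is fitting y = 2x + 1 by gradient descent on a mean-squared-error loss. The same update rule can be written out by hand in plain Python (a sketch of what the `fit` call does internally, with hand-derived MSE gradients; the learning rate 0.1 and iteration count are our choices, not the notebook's):

```python
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]   # targets from y = 2x + 1
w, b, lr = 0.0, 0.0, 0.1       # start from zero weight and bias

for _ in range(2000):
    n = len(xs)
    # MSE gradients: dL/dw = 2*mean((w*x + b - y) * x), dL/db = 2*mean(w*x + b - y)
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w, b = w - lr * dw, b - lr * db

print(round(w, 3), round(b, 3))  # converges close to 2.0 and 1.0
```

Because this toy dataset is exactly linear, gradient descent recovers the true slope and intercept; the Keras model above only approximates them because it trains on two points for a fixed number of epochs.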
# See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Keras Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages: - *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors. - *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions. - *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance. To get started, import `tf.keras` as part of your TensorFlow program setup: ###Code !pip install -q pyyaml # Required to save models in YAML format from __future__ import absolute_import, division, print_function import tensorflow as tf from tensorflow.keras import layers print(tf.VERSION) print(tf.keras.__version__) ###Output /home/vinc3/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters ###Markdown `tf.keras` can run any Keras-compatible code, but keep in mind: * The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`. * When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential model In Keras, you assemble *layers* to build *models*. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the `tf.keras.Sequential` model. To build a simple, fully-connected network (i.e. a multi-layer perceptron): ###Code model = tf.keras.Sequential() # Adds a densely-connected layer with 64 units to the model: model.add(layers.Dense(64, activation='relu')) # Add another: model.add(layers.Dense(64, activation='relu')) # Add a softmax layer with 10 output units: model.add(layers.Dense(10, activation='softmax')) ###Output _____no_output_____ ###Markdown Configure the layers There are many `tf.keras.layers` available with some common constructor parameters: * `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied. * `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. This defaults to the `"Glorot uniform"` initializer. * `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply to the layer's weights (kernel and bias), such as L1 or L2 regularization.
By default, no regularization is applied. The following instantiates `tf.keras.layers.Dense` layers using constructor arguments: ###Code # Create a sigmoid layer: layers.Dense(64, activation='sigmoid') # Or: layers.Dense(64, activation=tf.sigmoid) # A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix: layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01)) # A linear layer with L2 regularization of factor 0.01 applied to the bias vector: layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01)) # A linear layer with a kernel initialized to a random orthogonal matrix: layers.Dense(64, kernel_initializer='orthogonal') # A linear layer with a bias vector initialized to 2.0s: layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0)) ###Output _____no_output_____ ###Markdown Train and evaluate Set up training After the model is constructed, configure its learning process by calling the `compile` method: ###Code model = tf.keras.Sequential([ # Adds a densely-connected layer with 64 units to the model: layers.Dense(64, activation='relu', input_shape=(32,)), # Add another: layers.Dense(64, activation='relu'), # Add a softmax layer with 10 output units: layers.Dense(10, activation='softmax')]) model.compile(optimizer=tf.train.AdamOptimizer(0.001), loss='categorical_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown `tf.keras.Model.compile` takes three important arguments: * `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`. * `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module. * `metrics`: Used to monitor training.
These are string names or callables from the `tf.keras.metrics` module. The following shows a few examples of configuring a model for training: ###Code # Configure a model for mean-squared error regression. model.compile(optimizer=tf.train.AdamOptimizer(0.01), loss='mse', # mean squared error metrics=['mae']) # mean absolute error # Configure a model for categorical classification. model.compile(optimizer=tf.train.RMSPropOptimizer(0.01), loss=tf.keras.losses.categorical_crossentropy, metrics=[tf.keras.metrics.categorical_accuracy]) ###Output _____no_output_____ ###Markdown Input NumPy data For small datasets, use in-memory [NumPy](https://www.numpy.org/) arrays to train and evaluate a model. The model is "fit" to the training data using the `fit` method: ###Code import numpy as np def random_one_hot_labels(shape): n, n_class = shape classes = np.random.randint(0, n_class, n) labels = np.zeros((n, n_class)) labels[np.arange(n), classes] = 1 return labels data = np.random.random((1000, 32)) labels = random_one_hot_labels((1000, 10)) model.fit(data, labels, epochs=10, batch_size=32) ###Output Epoch 1/10 1000/1000 [==============================] - 2s 2ms/step - loss: 2.3540 - categorical_accuracy: 0.0790 Epoch 2/10 1000/1000 [==============================] - 0s 124us/step - loss: 2.3146 - categorical_accuracy: 0.0890 Epoch 3/10 1000/1000 [==============================] - 0s 102us/step - loss: 2.3090 - categorical_accuracy: 0.1060 Epoch 4/10 1000/1000 [==============================] - 0s 151us/step - loss: 2.3111 - categorical_accuracy: 0.1020 Epoch 5/10 1000/1000 [==============================] - 0s 141us/step - loss: 2.3022 - categorical_accuracy: 0.1260 Epoch 6/10 1000/1000 [==============================] - 0s 140us/step - loss: 2.2858 - categorical_accuracy: 0.1440 Epoch 7/10 1000/1000 [==============================] - 0s 148us/step - loss: 2.2781 - categorical_accuracy: 0.1290 Epoch 8/10 1000/1000 [==============================] - 0s 163us/step - loss: 2.2543 -
categorical_accuracy: 0.1750 Epoch 9/10 1000/1000 [==============================] - 0s 135us/step - loss: 2.2335 - categorical_accuracy: 0.1500 Epoch 10/10 1000/1000 [==============================] - 0s 162us/step - loss: 2.2111 - categorical_accuracy: 0.1840 ###Markdown `tf.keras.Model.fit` takes three important arguments: * `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches). * `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size. * `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch. Here's an example using `validation_data`: ###Code import numpy as np data = np.random.random((1000, 32)) labels = random_one_hot_labels((1000, 10)) val_data = np.random.random((100, 32)) val_labels = random_one_hot_labels((100, 10)) model.fit(data, labels, epochs=10, batch_size=32, validation_data=(val_data, val_labels)) ###Output Train on 1000 samples, validate on 100 samples Epoch 1/10 1000/1000 [==============================] - 0s 199us/step - loss: 2.3255 - categorical_accuracy: 0.1050 - val_loss: 2.3221 - val_categorical_accuracy: 0.1000 Epoch 2/10 1000/1000 [==============================] - 0s 102us/step - loss: 2.3006 - categorical_accuracy: 0.1050 - val_loss: 2.3592 - val_categorical_accuracy: 0.1200 Epoch 3/10 1000/1000 [==============================] - 0s 166us/step - loss: 2.3044 - categorical_accuracy: 0.1100 - val_loss: 2.3149 - val_categorical_accuracy: 0.0600 Epoch 4/10 1000/1000 [==============================] - ETA: 0s - loss:
2.2885 - categorical_accuracy: 0.13 - 0s 140us/step - loss: 2.2895 - categorical_accuracy: 0.1360 - val_loss: 2.3401 - val_categorical_accuracy: 0.1100 Epoch 5/10 1000/1000 [==============================] - 0s 175us/step - loss: 2.2851 - categorical_accuracy: 0.1240 - val_loss: 2.3539 - val_categorical_accuracy: 0.0100 Epoch 6/10 1000/1000 [==============================] - 0s 142us/step - loss: 2.2607 - categorical_accuracy: 0.1560 - val_loss: 2.2890 - val_categorical_accuracy: 0.1200 Epoch 7/10 1000/1000 [==============================] - 0s 147us/step - loss: 2.2412 - categorical_accuracy: 0.1570 - val_loss: 2.3507 - val_categorical_accuracy: 0.1000 Epoch 8/10 1000/1000 [==============================] - 0s 148us/step - loss: 2.2125 - categorical_accuracy: 0.1840 - val_loss: 2.3209 - val_categorical_accuracy: 0.1100 Epoch 9/10 1000/1000 [==============================] - 0s 160us/step - loss: 2.1950 - categorical_accuracy: 0.1970 - val_loss: 2.3493 - val_categorical_accuracy: 0.1500 Epoch 10/10 1000/1000 [==============================] - 0s 166us/step - loss: 2.1773 - categorical_accuracy: 0.2070 - val_loss: 2.4710 - val_categorical_accuracy: 0.0600 ###Markdown Input tf.data datasets Use the [Datasets API](./datasets.md) to scale to large datasets or multi-device training. Pass a `tf.data.Dataset` instance to the `fit` method: ###Code # Instantiates a toy dataset instance: dataset = tf.data.Dataset.from_tensor_slices((data, labels)) dataset = dataset.batch(32) dataset = dataset.repeat() # Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30) ###Output Epoch 1/10 30/30 [==============================] - 0s 12ms/step - loss: 2.1407 - categorical_accuracy: 0.2000 Epoch 2/10 30/30 [==============================] - 0s 4ms/step - loss: 2.1056 - categorical_accuracy: 0.2292 Epoch 3/10 30/30 [==============================] - 0s 5ms/step - loss: 2.0675 - categorical_accuracy: 0.2510 Epoch 4/10 30/30 [==============================] - 0s 5ms/step - loss: 2.0419 - categorical_accuracy: 0.2542 Epoch 5/10 30/30 [==============================] - 0s 5ms/step - loss: 1.9990 - categorical_accuracy: 0.2688 Epoch 6/10 30/30 [==============================] - 0s 6ms/step - loss: 1.9606 - categorical_accuracy: 0.2760 Epoch 7/10 30/30 [==============================] - 0s 5ms/step - loss: 1.9302 - categorical_accuracy: 0.2979 Epoch 8/10 30/30 [==============================] - 0s 5ms/step - loss: 1.8925 - categorical_accuracy: 0.3167 Epoch 9/10 30/30 [==============================] - 0s 5ms/step - loss: 1.8349 - categorical_accuracy: 0.3333 Epoch 10/10 30/30 [==============================] - 0s 5ms/step - loss: 1.8111 - categorical_accuracy: 0.3562 ###Markdown Here, the `fit` method uses the `steps_per_epoch` argument—this is the number of training steps the model runs before it moves to the next epoch.
Since the `Dataset` yields batches of data, this snippet does not require a `batch_size`. Datasets can also be used for validation: ###Code dataset = tf.data.Dataset.from_tensor_slices((data, labels)) dataset = dataset.batch(32).repeat() val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels)) val_dataset = val_dataset.batch(32).repeat() model.fit(dataset, epochs=10, steps_per_epoch=30, validation_data=val_dataset, validation_steps=3) ###Output Epoch 1/10 30/30 [==============================] - 0s 15ms/step - loss: 1.8156 - categorical_accuracy: 0.3448 - val_loss: 2.7179 - val_categorical_accuracy: 0.0833 Epoch 2/10 30/30 [==============================] - 0s 5ms/step - loss: 1.7482 - categorical_accuracy: 0.3604 - val_loss: 2.7707 - val_categorical_accuracy: 0.0312 Epoch 3/10 30/30 [==============================] - 0s 5ms/step - loss: 1.7041 - categorical_accuracy: 0.3979 - val_loss: 2.8149 - val_categorical_accuracy: 0.0312 Epoch 4/10 30/30 [==============================] - 0s 5ms/step - loss: 1.7045 - categorical_accuracy: 0.3917 - val_loss: 3.3738 - val_categorical_accuracy: 0.0729 Epoch 5/10 30/30 [==============================] - 0s 6ms/step - loss: 1.6443 - categorical_accuracy: 0.4271 - val_loss: 3.1136 - val_categorical_accuracy: 0.1146 Epoch 6/10 30/30 [==============================] - 0s 5ms/step - loss: 1.6521 - categorical_accuracy: 0.4073 - val_loss: 3.5981 - val_categorical_accuracy: 0.1042 Epoch 7/10 30/30 [==============================] - 0s 6ms/step - loss: 1.6231 - categorical_accuracy: 0.4448 - val_loss: 3.4140 - val_categorical_accuracy: 0.0312 Epoch 8/10 30/30 [==============================] - 0s 7ms/step - loss: 1.5952 - categorical_accuracy: 0.4437 - val_loss: 3.2468 - val_categorical_accuracy: 0.0938 Epoch 9/10 30/30 [==============================] - 0s 5ms/step - loss: 1.5510 - categorical_accuracy: 0.4583 - val_loss: 3.2180 - val_categorical_accuracy: 0.0833 Epoch 10/10 30/30 [==============================] - 0s
4ms/step - loss: 1.5317 - categorical_accuracy: 0.4677 - val_loss: 3.4781 - val_categorical_accuracy: 0.0625 ###Markdown Evaluate and predict The `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPy data and a `tf.data.Dataset`. To *evaluate* the inference-mode loss and metrics for the data provided: ###Code data = np.random.random((1000, 32)) labels = random_one_hot_labels((1000, 10)) model.evaluate(data, labels, batch_size=32) model.evaluate(dataset, steps=30) ###Output 1000/1000 [==============================] - 0s 143us/step 30/30 [==============================] - 0s 6ms/step ###Markdown And to *predict* the output of the last layer in inference for the data provided, as a NumPy array: ###Code result = model.predict(data, batch_size=32) print(result.shape) ###Output (1000, 10) ###Markdown Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannot represent arbitrary models. Use the [Keras functional API](https://keras.io/getting-started/functional-api-guide/) to build complex model topologies such as: * Multi-input models, * Multi-output models, * Models with shared layers (the same layer called several times), * Models with non-sequential data flows (e.g. residual connections). Building a model with the functional API works like this: 1. A layer instance is callable and returns a tensor. 2. Input tensors and output tensors are used to define a `tf.keras.Model` instance. 3. This model is trained just like the `Sequential` model. The following example uses the functional API to build a simple, fully-connected network: ###Code inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor # A layer instance is callable on a tensor, and returns a tensor. x = layers.Dense(64, activation='relu')(inputs) x = layers.Dense(64, activation='relu')(x) predictions = layers.Dense(10, activation='softmax')(x) ###Output _____no_output_____ ###Markdown Instantiate the model given inputs and outputs.
###Code model = tf.keras.Model(inputs=inputs, outputs=predictions) # The compile step specifies the training configuration. model.compile(optimizer=tf.train.RMSPropOptimizer(0.001), loss='categorical_crossentropy', metrics=['accuracy']) # Trains for 5 epochs model.fit(data, labels, batch_size=32, epochs=5) ###Output Epoch 1/5 1000/1000 [==============================] - 1s 727us/step - loss: 2.3190 - acc: 0.1030 Epoch 2/5 1000/1000 [==============================] - 0s 192us/step - loss: 2.3156 - acc: 0.1020 Epoch 3/5 1000/1000 [==============================] - 0s 157us/step - loss: 2.3093 - acc: 0.1130 Epoch 4/5 1000/1000 [==============================] - 0s 110us/step - loss: 2.3012 - acc: 0.1250 Epoch 5/5 1000/1000 [==============================] - 0s 117us/step - loss: 2.2907 - acc: 0.1320 ###Markdown Model subclassing Build a fully-customizable model by subclassing `tf.keras.Model` and defining your own forward pass. Create layers in the `__init__` method and set them as attributes of the class instance. Define the forward pass in the `call` method. Model subclassing is particularly useful when [eager execution](./eager.md) is enabled, since the forward pass can be written imperatively. Key Point: Use the right API for the job. While model subclassing offers flexibility, it comes at a cost of greater complexity and more opportunities for user errors. If possible, prefer the functional API. The following example shows a subclassed `tf.keras.Model` using a custom forward pass: ###Code class MyModel(tf.keras.Model): def __init__(self, num_classes=10): super(MyModel, self).__init__(name='my_model') self.num_classes = num_classes # Define your layers here. self.dense_1 = layers.Dense(32, activation='relu') self.dense_2 = layers.Dense(num_classes, activation='sigmoid') def call(self, inputs): # Define your forward pass here, # using layers you previously defined (in `__init__`).
x = self.dense_1(inputs) return self.dense_2(x) def compute_output_shape(self, input_shape): # You need to override this function if you want to use the subclassed model # as part of a functional-style model. # Otherwise, this method is optional. shape = tf.TensorShape(input_shape).as_list() shape[-1] = self.num_classes return tf.TensorShape(shape) ###Output _____no_output_____ ###Markdown Instantiate the new model class: ###Code model = MyModel(num_classes=10) # The compile step specifies the training configuration. model.compile(optimizer=tf.train.RMSPropOptimizer(0.001), loss='categorical_crossentropy', metrics=['accuracy']) # Trains for 5 epochs. model.fit(data, labels, batch_size=32, epochs=5) ###Output Epoch 1/5 1000/1000 [==============================] - 1s 760us/step - loss: 2.3137 - acc: 0.1080 Epoch 2/5 1000/1000 [==============================] - 0s 113us/step - loss: 2.3125 - acc: 0.1070 Epoch 3/5 1000/1000 [==============================] - 0s 108us/step - loss: 2.3090 - acc: 0.1070 Epoch 4/5 1000/1000 [==============================] - 0s 102us/step - loss: 2.3030 - acc: 0.1030 Epoch 5/5 1000/1000 [==============================] - 0s 100us/step - loss: 2.2980 - acc: 0.1160 ###Markdown Custom layers Create a custom layer by subclassing `tf.keras.layers.Layer` and implementing the following methods: * `build`: Create the weights of the layer.
Add weights with the `add_weight` method. * `call`: Define the forward pass. * `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape. * Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method. Here's an example of a custom layer that implements a `matmul` of an input with a kernel matrix: ###Code class MyLayer(layers.Layer): def __init__(self, output_dim, **kwargs): self.output_dim = output_dim super(MyLayer, self).__init__(**kwargs) def build(self, input_shape): shape = tf.TensorShape((input_shape[1], self.output_dim)) # Create a trainable weight variable for this layer. self.kernel = self.add_weight(name='kernel', shape=shape, initializer='uniform', trainable=True) # Make sure to call the `build` method at the end super(MyLayer, self).build(input_shape) def call(self, inputs): return tf.matmul(inputs, self.kernel) def compute_output_shape(self, input_shape): shape = tf.TensorShape(input_shape).as_list() shape[-1] = self.output_dim return tf.TensorShape(shape) def get_config(self): base_config = super(MyLayer, self).get_config() base_config['output_dim'] = self.output_dim return base_config @classmethod def from_config(cls, config): return cls(**config) ###Output _____no_output_____ ###Markdown Create a model using your custom layer: ###Code model = tf.keras.Sequential([ MyLayer(10), layers.Activation('softmax')]) # The compile step specifies the training configuration model.compile(optimizer=tf.train.RMSPropOptimizer(0.001), loss='categorical_crossentropy', metrics=['accuracy']) # Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5) ###Output Epoch 1/5 1000/1000 [==============================] - 1s 636us/step - loss: 2.3046 - acc: 0.1040 Epoch 2/5 1000/1000 [==============================] - 0s 158us/step - loss: 2.3031 - acc: 0.1160 Epoch 3/5 1000/1000 [==============================] - 0s 157us/step - loss: 2.3007 - acc: 0.1100 Epoch 4/5 1000/1000 [==============================] - 0s 125us/step - loss: 2.2990 - acc: 0.1190 Epoch 5/5 1000/1000 [==============================] - 0s 137us/step - loss: 2.2964 - acc: 0.1220 ###Markdown Callbacks A callback is an object passed to a model to customize and extend its behavior during training. You can write your own custom callback, or use the built-in `tf.keras.callbacks` that include: * `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals. * `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate. * `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving. * `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](./summaries_and_tensorboard.md). To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method: ###Code callbacks = [ # Interrupt training if `val_loss` stops improving for over 2 epochs tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'), # Write TensorBoard logs to `./logs` directory tf.keras.callbacks.TensorBoard(log_dir='./logs') ] model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks, validation_data=(val_data, val_labels)) ###Output Train on 1000 samples, validate on 100 samples Epoch 1/5 1000/1000 [==============================] - 0s 428us/step - loss: 2.2948 - acc: 0.1230 - val_loss: 2.3107 - val_acc: 0.1300 Epoch 2/5 1000/1000 [==============================] - 0s 159us/step - loss: 2.2925 - acc: 0.1310 - val_loss: 2.3099 - val_acc: 0.1400 Epoch 3/5 1000/1000 [==============================] - 0s 146us/step -
loss: 2.2910 - acc: 0.1250 - val_loss: 2.3099 - val_acc: 0.0900 Epoch 4/5 1000/1000 [==============================] - 0s 194us/step - loss: 2.2887 - acc: 0.1300 - val_loss: 2.3095 - val_acc: 0.1600 Epoch 5/5 1000/1000 [==============================] - 0s 205us/step - loss: 2.2866 - acc: 0.1320 - val_loss: 2.3099 - val_acc: 0.1500 ###Markdown Save and restore Weights only Save and load the weights of a model using `tf.keras.Model.save_weights`: ###Code model = tf.keras.Sequential([ layers.Dense(64, activation='relu', input_shape=(32,)), layers.Dense(10, activation='softmax')]) model.compile(optimizer=tf.train.AdamOptimizer(0.001), loss='categorical_crossentropy', metrics=['accuracy']) # Save weights to a TensorFlow Checkpoint file model.save_weights('./weights/my_model') # Restore the model's state, # this requires a model with the same architecture. model.load_weights('./weights/my_model') ###Output _____no_output_____ ###Markdown By default, this saves the model's weights in the [TensorFlow checkpoint](./checkpoints.md) file format. Weights can also be saved to the Keras HDF5 format (the default for the multi-backend implementation of Keras): ###Code # Save weights to a HDF5 file model.save_weights('my_model.h5', save_format='h5') # Restore the model's state model.load_weights('my_model.h5') ###Output _____no_output_____ ###Markdown Configuration only A model's configuration can be saved—this serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. 
Keras supports JSON and YAML serialization formats: ###Code # Serialize a model to JSON format json_string = model.to_json() json_string import json import pprint pprint.pprint(json.loads(json_string)) ###Output {'backend': 'tensorflow', 'class_name': 'Sequential', 'config': {'layers': [{'class_name': 'Dense', 'config': {'activation': 'relu', 'activity_regularizer': None, 'batch_input_shape': [None, 32], 'bias_constraint': None, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'bias_regularizer': None, 'dtype': 'float32', 'kernel_constraint': None, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'dtype': 'float32', 'seed': None}}, 'kernel_regularizer': None, 'name': 'dense_17', 'trainable': True, 'units': 64, 'use_bias': True}}, {'class_name': 'Dense', 'config': {'activation': 'softmax', 'activity_regularizer': None, 'bias_constraint': None, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'bias_regularizer': None, 'dtype': 'float32', 'kernel_constraint': None, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'dtype': 'float32', 'seed': None}}, 'kernel_regularizer': None, 'name': 'dense_18', 'trainable': True, 'units': 10, 'use_bias': True}}], 'name': 'sequential_3'}, 'keras_version': '2.1.6-tf'} ###Markdown Recreate the model (newly initialized) from the JSON: ###Code fresh_model = tf.keras.models.model_from_json(json_string) ###Output _____no_output_____ ###Markdown Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*: ###Code yaml_string = model.to_yaml() print(yaml_string) ###Output backend: tensorflow class_name: Sequential config: layers: - class_name: Dense config: activation: relu activity_regularizer: null batch_input_shape: !!python/tuple [null, 32] bias_constraint: null bias_initializer: class_name: Zeros config: {dtype: float32} bias_regularizer: null dtype: float32 kernel_constraint: null kernel_initializer: 
class_name: GlorotUniform config: {dtype: float32, seed: null} kernel_regularizer: null name: dense_17 trainable: true units: 64 use_bias: true - class_name: Dense config: activation: softmax activity_regularizer: null bias_constraint: null bias_initializer: class_name: Zeros config: {dtype: float32} bias_regularizer: null dtype: float32 kernel_constraint: null kernel_initializer: class_name: GlorotUniform config: {dtype: float32, seed: null} kernel_regularizer: null name: dense_18 trainable: true units: 10 use_bias: true name: sequential_3 keras_version: 2.1.6-tf ###Markdown Recreate the model from the YAML: ###Code fresh_model = tf.keras.models.model_from_yaml(yaml_string) ###Output _____no_output_____ ###Markdown Caution: Subclassed models are not serializable because their architecture is defined by the Python code in the body of the `call` method. Entire model The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code. ###Code # Create a trivial model model = tf.keras.Sequential([ layers.Dense(10, activation='softmax', input_shape=(32,)), layers.Dense(10, activation='softmax') ]) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(data, labels, batch_size=32, epochs=5) # Save entire model to a HDF5 file model.save('my_model.h5') # Recreate the exact same model, including weights and optimizer. 
model = tf.keras.models.load_model('my_model.h5') ###Output Epoch 1/5 1000/1000 [==============================] - 1s 1ms/step - loss: 2.3078 - acc: 0.1020 Epoch 2/5 1000/1000 [==============================] - 0s 126us/step - loss: 2.3049 - acc: 0.1040 Epoch 3/5 1000/1000 [==============================] - 0s 108us/step - loss: 2.3030 - acc: 0.1220 Epoch 4/5 1000/1000 [==============================] - 0s 133us/step - loss: 2.3016 - acc: 0.1250 Epoch 5/5 1000/1000 [==============================] - 0s 146us/step - loss: 2.3008 - acc: 0.1310 ###Markdown Eager execution [Eager execution](./eager.md) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.md#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to a `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating_estimators_from_keras_models). 
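The custom training loops mentioned above reduce to three steps: run the forward pass to get a loss, compute the gradients, and apply them. As a framework-free illustration of that cycle (NumPy only; in `tf.keras` this bookkeeping is exactly what `tf.GradientTape` and an optimizer automate, and the variable names here are hypothetical), one such step for a linear model might look like:

```python
import numpy as np

def train_step(w, x, y, lr=0.1):
    """One hand-rolled gradient-descent step for y_pred = x @ w with MSE loss."""
    y_pred = x @ w                               # forward pass
    loss = np.mean((y_pred - y) ** 2)            # mean-squared-error loss
    grad = 2 * x.T @ (y_pred - y) / len(y)       # analytic gradient w.r.t. w
    w = w - lr * grad                            # apply the update
    return w, loss

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w
w = np.zeros(3)
for _ in range(200):
    w, loss = train_step(w, x, y)
```

After a few hundred steps `w` converges to the true weights; in an eager-execution loop, the analytic gradient line is replaced by `tape.gradient(loss, w)`.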
###Code model = tf.keras.Sequential([layers.Dense(10,activation='softmax'), layers.Dense(10,activation='softmax')]) model.compile(optimizer=tf.train.RMSPropOptimizer(0.001), loss='categorical_crossentropy', metrics=['accuracy']) estimator = tf.keras.estimator.model_to_estimator(model) ###Output INFO:tensorflow:Using default config. WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp0zfxo0go INFO:tensorflow:Using the Keras model provided. INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmp0zfxo0go', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } } , '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fc39bb6b7b8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1} ###Markdown Note: Enable [eager execution](./eager.md) for debugging [Estimator input functions](./premade_estimators.md#create_input_functions) and inspecting data. Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.contrib.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.contrib.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. 
To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model: ###Code model = tf.keras.Sequential() model.add(layers.Dense(16, activation='relu', input_shape=(10,))) model.add(layers.Dense(1, activation='sigmoid')) optimizer = tf.train.GradientDescentOptimizer(0.2) model.compile(loss='binary_crossentropy', optimizer=optimizer) model.summary() ###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_23 (Dense) (None, 16) 176 _________________________________________________________________ dense_24 (Dense) (None, 1) 17 ================================================================= Total params: 193 Trainable params: 193 Non-trainable params: 0 _________________________________________________________________ ###Markdown Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object used to distribute the data across multiple devices—with each device processing a slice of the input batch. ###Code def input_fn(): x = np.random.random((1024, 10)) y = np.random.randint(2, size=(1024, 1)) x = tf.cast(x, tf.float32) dataset = tf.data.Dataset.from_tensor_slices((x, y)) dataset = dataset.repeat(10) dataset = dataset.batch(32) return dataset ###Output _____no_output_____ ###Markdown Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument to the `tf.contrib.distribute.MirroredStrategy` instance. When creating `MirroredStrategy`, you can specify a list of devices or set the `num_gpus` argument. 
The default uses all available GPUs, like the following: ###Code strategy = tf.contrib.distribute.MirroredStrategy() config = tf.estimator.RunConfig(train_distribute=strategy) ###Output INFO:tensorflow:Initializing RunConfig with distribution strategies. INFO:tensorflow:Not using Distribute Coordinator. ###Markdown Convert the Keras model to a `tf.estimator.Estimator` instance: ###Code keras_estimator = tf.keras.estimator.model_to_estimator( keras_model=model, config=config, model_dir='/tmp/model_dir') ###Output INFO:tensorflow:Using the Keras model provided. INFO:tensorflow:Using config: {'_model_dir': '/tmp/model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } } , '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7fc38d46a978>, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fc38d46ab70>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None} ###Markdown Finally, train the `Estimator` instance by providing the `input_fn` and `steps` arguments: ###Code keras_estimator.train(input_fn=input_fn, steps=10) ###Output INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_CPU:0 WARNING:tensorflow:Not all devices in DistributionStrategy are visible to TensorFlow session. INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. 
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={}) INFO:tensorflow:Warm-starting from: ('/tmp/model_dir/keras/keras_model.ckpt',) INFO:tensorflow:Warm-starting variable: dense_23/kernel; prev_var_name: Unchanged INFO:tensorflow:Warm-starting variable: dense_23/bias; prev_var_name: Unchanged INFO:tensorflow:Warm-starting variable: dense_24/kernel; prev_var_name: Unchanged INFO:tensorflow:Warm-starting variable: dense_24/bias; prev_var_name: Unchanged INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 0 into /tmp/model_dir/model.ckpt. INFO:tensorflow:loss = 0.71503776, step = 0 INFO:tensorflow:Saving checkpoints for 10 into /tmp/model_dir/model.ckpt. INFO:tensorflow:Finalize system. INFO:tensorflow:Loss for final step: 0.6912508. 
###Markdown Created by qqgeogor https://www.kaggle.com/qqgeogor https://www.kaggle.com/qqgeogor/keras-based-fm ###Code import numpy as np from sklearn.base import BaseEstimator from keras.layers import Input, Embedding, Dense,Flatten, merge,Activation from keras.models import Model from keras.regularizers import l2 as l2_reg from keras import initializations import itertools def make_batches(size, batch_size): nb_batch = int(np.ceil(size/float(batch_size))) return [(i*batch_size, min(size, (i+1)*batch_size)) for i in range(0, nb_batch)] def batch_generator(X,y,batch_size=128,shuffle=True): sample_size = X[0].shape[0] index_array = np.arange(sample_size) while 1: if shuffle: np.random.shuffle(index_array) batches = make_batches(sample_size, batch_size) for batch_index, (batch_start, batch_end) in enumerate(batches): batch_ids = index_array[batch_start:batch_end] X_batch = [X[i][batch_ids] for i in range(len(X))] y_batch = y[batch_ids] yield X_batch,y_batch def test_batch_generator(X,y,batch_size=128): sample_size = X[0].shape[0] index_array = np.arange(sample_size) batches = make_batches(sample_size, batch_size) for batch_index, (batch_start, batch_end) in enumerate(batches): batch_ids = index_array[batch_start:batch_end] X_batch = [X[i][batch_ids] for i in range(len(X))] y_batch = y[batch_ids] yield X_batch,y_batch def predict_batch(model,X_t,batch_size=128): outcome = [] for X_batch,y_batch in test_batch_generator(X_t,np.zeros(X_t[0].shape[0]),batch_size=batch_size): outcome.append(model.predict(X_batch,batch_size=batch_size)) outcome = np.concatenate(outcome).ravel() return outcome def build_model(max_features,K=8,solver='adam',l2=0.0,l2_fm = 0.0): inputs = [] flatten_layers=[] columns = range(len(max_features)) for c in columns: inputs_c = Input(shape=(1,), dtype='int32',name = 'input_%s'%c) num_c = max_features[c] embed_c = Embedding( num_c, K, input_length=1, name = 'embed_%s'%c, W_regularizer=l2_reg(l2_fm) )(inputs_c) flatten_c = Flatten()(embed_c) 
inputs.append(inputs_c) flatten_layers.append(flatten_c) fm_layers = [] for emb1,emb2 in itertools.combinations(flatten_layers, 2): dot_layer = merge([emb1,emb2],mode='dot',dot_axes=1) fm_layers.append(dot_layer) for c in columns: num_c = max_features[c] embed_c = Embedding( num_c, 1, input_length=1, name = 'linear_%s'%c, W_regularizer=l2_reg(l2) )(inputs[c]) flatten_c = Flatten()(embed_c) fm_layers.append(flatten_c) flatten = merge(fm_layers,mode='sum') outputs = Activation('sigmoid',name='outputs')(flatten) model = Model(input=inputs, output=outputs) model.compile( optimizer=solver, loss= 'binary_crossentropy' ) return model class KerasFM(BaseEstimator): def __init__(self,max_features=[],K=8,solver='adam',l2=0.0,l2_fm = 0.0): self.model = build_model(max_features,K,solver,l2=l2,l2_fm = l2_fm) def fit(self,X,y,batch_size=128,nb_epoch=10,shuffle=True,verbose=1,validation_data=None): # Pass the provided validation data through (it was previously hard-coded to None) self.model.fit(X,y,batch_size=batch_size,nb_epoch=nb_epoch,shuffle=shuffle,verbose=verbose,validation_data=validation_data) def fit_generator(self,X,y,batch_size=128,nb_epoch=10,shuffle=True,verbose=1,validation_data=None,callbacks=None): tr_gen = batch_generator(X,y,batch_size=batch_size,shuffle=shuffle) if validation_data: X_test,y_test = validation_data te_gen = batch_generator(X_test,y_test,batch_size=batch_size,shuffle=False) nb_val_samples = X_test[-1].shape[0] else: te_gen = None nb_val_samples = None self.model.fit_generator( tr_gen, samples_per_epoch=X[-1].shape[0], nb_epoch=nb_epoch, verbose=verbose, callbacks=callbacks, validation_data=te_gen, nb_val_samples=nb_val_samples, max_q_size=10 ) def predict(self,X,batch_size=128): y_preds = predict_batch(self.model,X,batch_size=batch_size) return y_preds ###Output _____no_output_____ ###Markdown 1) Data Access ###Code # Imports needed by this notebook import os import time import numpy as np import matplotlib.pyplot as plt from keras.preprocessing.image import load_img, img_to_array from keras.layers import Input, Conv2D, UpSampling2D from keras.models import Model from skimage.color import rgb2lab, lab2rgb #Read Training data and convert into Lab space #To convert images to 256x256 use this command on the bash #for i in *.jpg; do convert $i -scale 256x256 -gravity center -background white -extent 256x256 resized/f$i; done #Read 
Training data X = [] for filename in os.listdir('mixed/'): X.append(img_to_array(load_img('mixed/'+filename))) X = np.array(X, dtype=float) Xtrain = 1.0/255*X #Convert into Lab space for i in np.arange(len(Xtrain)): Xtrain[i] = rgb2lab(Xtrain[i]) Xtrain[i] = (Xtrain[i] + [0, 128, 128]) / [100, 255, 255] Xtrain.shape #Separate X (Lightness) and Y(ab images) Y_train = Xtrain[:,:,:,1:] X_train = Xtrain[:,:,:,0] X_train=X_train.reshape((X_train.shape[0],X_train.shape[1],X_train.shape[2],1)) #Read Testing data X_test = [] for filename in os.listdir('test'): X_test.append(img_to_array(load_img('test/'+filename))/255.) X_test = np.array(X_test) ###Output _____no_output_____ ###Markdown 2) Model Building ###Code #Encoder encoder_input = Input(shape=(256, 256, 1,)) encoder_output = Conv2D(64, (3,3), activation='relu', padding='same', strides=2)(encoder_input) encoder_output = Conv2D(128, (3,3), activation='relu', padding='same')(encoder_output) encoder_output = Conv2D(128, (3,3), activation='relu', padding='same', strides=2)(encoder_output) encoder_output = Conv2D(256, (3,3), activation='relu', padding='same')(encoder_output) encoder_output = Conv2D(256, (3,3), activation='relu', padding='same', strides=2)(encoder_output) #Decoder decoder_output = Conv2D(128, (3,3), activation='relu', padding='same')(encoder_output) decoder_output = UpSampling2D((2, 2))(decoder_output) decoder_output = Conv2D(64, (3,3), activation='relu', padding='same')(decoder_output) decoder_output = UpSampling2D((2, 2))(decoder_output) decoder_output = Conv2D(32, (3,3), activation='relu', padding='same')(decoder_output) decoder_output = Conv2D(16, (3,3), activation='relu', padding='same')(decoder_output) decoder_output = Conv2D(2, (3, 3), activation='tanh', padding='same')(decoder_output) decoder_output = UpSampling2D((2, 2))(decoder_output) model = Model(inputs=encoder_input, outputs=decoder_output) model.summary() batch_size = 20 epochs = 30 #Train model #tensorboard = TensorBoard(log_dir="./output") 
start_time = time.time() model.compile(optimizer='adam', loss='mse') history = model.fit(x=X_train,y=Y_train,batch_size=batch_size,epochs=epochs, validation_split=0.10,shuffle=True) print('Finished Training') print("--- %s seconds ---" % (time.time() - start_time)) def plot_curves(history): loss = history.history['loss'] val_loss=history.history['val_loss'] epochs = range(1, len(loss)+1) with plt.style.context('fivethirtyeight'): plt.figure(figsize=(20,10)) plt.plot(epochs, loss, label='Training loss') plt.plot(epochs, val_loss, label='Validation loss') plt.legend(frameon=False) plt.show() plot_curves(history) def colour_image(image_data): input_img = rgb2lab(image_data) #input_img = rgb2gray(image_data) input_img = (input_img + [0, 128, 128]) / [100, 255, 255] input_img = input_img[:,:,0] input_img = input_img.reshape((1,256,256,1)) imgout = model.predict(input_img) coloured = np.zeros((256,256,3)) coloured[:,:,0] = input_img.reshape((256,256)) coloured[:,:,1:] = imgout coloured = (coloured * [100, 255, 255]) - [0, 128, 128] rgb_image = lab2rgb(coloured) plt.figure(figsize=(20,10)) plt.subplot(131) # 1x3 grid: three panels are drawn below (was incorrectly 121) plt.imshow(rgb_image) plt.subplot(132) plt.imshow(image_data) plt.subplot(133) plt.imshow(rgb_image) for i in np.arange(20,30): image_number=i colour_image(X_test[image_number]) ###Output _____no_output_____ ###Markdown Keras Tutorial Keras is a high-level API for defining and training neural network models, fully compatible with TensorFlow. It is included in TensorFlow as the **tf.keras** module. Keras abstracts the design of a neural network in terms of the layers that compose it. All the mechanics of gradient descent, backpropagation, and optimizers are handled internally, so the developer only needs to worry about correctly defining the architecture and hyperparameters. 
Image classifier using Keras In this notebook, we will build a densely connected feed-forward neural network to classify handwritten digit data. The dataset used is MNIST, a well-known dataset in the world of image classification, considered the hello world of neural networks for computer vision. ###Code # import the tensorflow module (tensorflow >= 2.0) import tensorflow as tf # import the dataset mnist = tf.keras.datasets.mnist # extract the dataset into tensors (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 print(f'training: {x_train.shape}') print(f'test: {x_test.shape}') ###Output training: (60000, 28, 28) test: (10000, 28, 28) ###Markdown The Sequential model In Keras, a feed-forward neural network is defined using the sequential model. In this model, the layers of the network are assumed to come one after another and are stacked up as new layers are added. The sequential model can be applied to a vast range of applications. To build the model we need to create an instance of the **Sequential** object and add the different layers that compose it. ###Code # create the model and add the layers model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), # Dense is a densely connected layer WX + b tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10) ]) # the output of the network is a linear layer, also called logits. predictions = model(x_train[:1]).numpy() predictions # to convert these logits into a softmax output we apply a tensorflow operator tf.nn.softmax(predictions).numpy() # print(tf.argmax(predictions[0])) ###Output _____no_output_____ ###Markdown Loss function Keras provides several built-in loss functions that we can work with directly. 
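To make the logits-to-loss path concrete, here is a small NumPy sketch (with made-up logits) of what `SparseCategoricalCrossentropy(from_logits=True)` computes: a softmax over the logits, then the negative log-probability of the true class:

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability before exponentiating
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def sparse_categorical_crossentropy(logits, true_class):
    # Loss is the negative log-probability assigned to the true class
    probs = softmax(logits)
    return -np.log(probs[true_class])

# Made-up logits for a 4-class problem
logits = np.array([2.0, 1.0, 0.1, -1.0])
loss = sparse_categorical_crossentropy(logits, true_class=0)
```

This also explains the initial loss values: an untrained 10-class model produces roughly uniform probabilities, so the loss starts near -ln(1/10) ≈ 2.30.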
###Code # using categorical cross-entropy loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) # we can compute the loss of the predictions loss_fn(y_train[:1], predictions).numpy() ###Output _____no_output_____ ###Markdown Model compilation Once the architecture and loss function are defined, the model is *compiled*, which turns our object into a TensorFlow computation graph. ###Code model.compile(optimizer='adam', # optimizer algorithm (gradient descent) loss=loss_fn, # loss function metrics=['accuracy']) # metrics to monitor model.summary() ###Output Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= flatten_1 (Flatten) (None, 784) 0 _________________________________________________________________ dense_2 (Dense) (None, 128) 100480 _________________________________________________________________ dropout_1 (Dropout) (None, 128) 0 _________________________________________________________________ dense_3 (Dense) (None, 10) 1290 ================================================================= Total params: 101,770 Trainable params: 101,770 Non-trainable params: 0 _________________________________________________________________ ###Markdown Model training To train the model, once the architecture and the remaining parameters are defined, we simply call the fit method on our model. 
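As a quick sanity check on what `fit` does per epoch: with Keras's default `batch_size=32`, the 60,000 MNIST training images are split into ⌈60000/32⌉ = 1875 batches, which is the step count Keras reports in its progress bar. A sketch of that arithmetic:

```python
import math

def steps_per_epoch(num_samples, batch_size):
    # Keras runs ceil(num_samples / batch_size) gradient steps per epoch
    return math.ceil(num_samples / batch_size)

steps = steps_per_epoch(60000, 32)  # MNIST with the default batch size
```

The same arithmetic applies at evaluation time: the 10,000 test images give ⌈10000/32⌉ = 313 batches.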
###Code model.fit(x_train, y_train, epochs=5) ###Output Epoch 1/5 1875/1875 [==============================] - 4s 2ms/step - loss: 0.2973 - accuracy: 0.9140 Epoch 2/5 1875/1875 [==============================] - 4s 2ms/step - loss: 0.1406 - accuracy: 0.9585 Epoch 3/5 1875/1875 [==============================] - 4s 2ms/step - loss: 0.1055 - accuracy: 0.9690 Epoch 4/5 1875/1875 [==============================] - 4s 2ms/step - loss: 0.0877 - accuracy: 0.9728 Epoch 5/5 1875/1875 [==============================] - 4s 2ms/step - loss: 0.0728 - accuracy: 0.9771 ###Markdown Model evaluation There is also a built-in function to evaluate the model on a test set ###Code model.evaluate(x_test, y_test, verbose=2) ###Output 313/313 - 0s - loss: 0.0675 - accuracy: 0.9787 ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Distributed training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview The `tf.distribute.Strategy` API provides an abstraction for distributing your training across multiple processing units. It allows you to carry out distributed training using existing models and training code with minimal changes. This tutorial demonstrates how to use the `tf.distribute.MirroredStrategy` to perform in-graph replication with _synchronous training on many GPUs on one machine_. 
The strategy essentially copies all of the model's variables to each processor. Then, it uses [all-reduce](http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/) to combine the gradients from all processors, and applies the combined value to all copies of the model. You will use the `tf.keras` APIs to build the model and `Model.fit` for training it. (To learn about distributed training with a custom training loop and the `MirroredStrategy`, check out [this tutorial](custom_training.ipynb).) `MirroredStrategy` trains your model on multiple GPUs on a single machine. For _synchronous training on many GPUs on multiple workers_, use the `tf.distribute.MultiWorkerMirroredStrategy` [with the Keras Model.fit](multi_worker_with_keras.ipynb) or [a custom training loop](multi_worker_with_ctl.ipynb). For other options, refer to the [Distributed training guide](../../guide/distributed_training.ipynb). To learn about various other strategies, there is the [Distributed training with TensorFlow](../../guide/distributed_training.ipynb) guide. Setup ###Code import tensorflow_datasets as tfds import tensorflow as tf import os # Load the TensorBoard notebook extension. %load_ext tensorboard print(tf.__version__) ###Output 2.5.0 ###Markdown Download the dataset Load the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). This returns a dataset in the `tf.data` format. Setting the `with_info` argument to `True` includes the metadata for the entire dataset, which is being saved here to `info`. Among other things, this metadata object includes the number of train and test examples. 
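Once loaded, the MNIST images come as `uint8` pixels in [0, 255] and are typically scaled to [0, 1] before training. The preprocessing cell itself falls outside this excerpt, so the function below is an illustrative NumPy stand-in for that `tf.data` map step:

```python
import numpy as np

def scale(image):
    # Cast uint8 pixels to float32 and normalize to the [0, 1] range,
    # mirroring what a tf.data map step would do before training.
    return image.astype(np.float32) / 255.0

img = np.array([[0, 128, 255]], dtype=np.uint8)  # toy 1x3 "image"
scaled = scale(img)
```

In the tutorial proper, the equivalent transformation would be applied with `dataset.map(...)` using `tf.cast` and a division by 255.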
###Code datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_train, mnist_test = datasets['train'], datasets['test'] ###Output 2021-08-04 01:25:00.048530: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1 2021-08-04 01:25:00.691099: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-08-04 01:25:00.691993: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: pciBusID: 0000:00:05.0 name: Tesla V100-SXM2-16GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s 2021-08-04 01:25:00.692033: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 2021-08-04 01:25:00.695439: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11 2021-08-04 01:25:00.695536: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11 2021-08-04 01:25:00.696685: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10 2021-08-04 01:25:00.697009: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10 2021-08-04 01:25:00.698067: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11 2021-08-04 01:25:00.698998: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11 2021-08-04 01:25:00.699164: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8 2021-08-04 01:25:00.699264: I 
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-08-04 01:25:00.700264: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-08-04 01:25:00.701157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0 2021-08-04 01:25:00.701928: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-08-04 01:25:00.702642: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-08-04 01:25:00.703535: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: pciBusID: 0000:00:05.0 name: Tesla V100-SXM2-16GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s 2021-08-04 01:25:00.703621: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-08-04 01:25:00.704507: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-08-04 01:25:00.705349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0 2021-08-04 01:25:00.705388: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 2021-08-04 01:25:01.356483: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix: 2021-08-04 01:25:01.356521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0 2021-08-04 01:25:01.356530: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N 2021-08-04 01:25:01.356777: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-08-04 01:25:01.357792: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-08-04 01:25:01.358756: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-08-04 01:25:01.359641: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0) ###Markdown Define the distribution strategy Create a `MirroredStrategy` object. This will handle distribution and provide a context manager (`MirroredStrategy.scope`) to build your model inside. ###Code strategy = tf.distribute.MirroredStrategy() print('Number of devices: {}'.format(strategy.num_replicas_in_sync)) ###Output Number of devices: 1 ###Markdown Set up the input pipeline When training a model with multiple GPUs, you can use the extra computing power effectively by increasing the batch size. In general, use the largest batch size that fits the GPU memory and tune the learning rate accordingly. 
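The batch-size advice above can be made concrete with a common heuristic (an aside of this note, not something the tutorial prescribes): when the global batch size grows with the number of replicas, the learning rate is often scaled by the same factor. A minimal sketch, where `scaled_learning_rate` is a hypothetical helper:

```python
# Hypothetical helper: linear learning-rate scaling with replica count.
# Assumption: the per-replica batch size is fixed, so the global batch
# size (and hence the scaling factor) grows linearly with the replicas.
def scaled_learning_rate(base_lr, num_replicas):
    """Scale a base learning rate to match a replica-scaled batch size."""
    return base_lr * num_replicas

# With a single replica nothing changes; with 8 replicas the rate grows 8x.
print(scaled_learning_rate(1e-3, 1))
print(scaled_learning_rate(1e-3, 8))
```

Linear scaling is only a starting point; in practice the rate is usually re-tuned after changing the global batch size.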
###Code # You can also do info.splits.total_num_examples to get the total # number of examples in the dataset. num_train_examples = info.splits['train'].num_examples num_test_examples = info.splits['test'].num_examples BUFFER_SIZE = 10000 BATCH_SIZE_PER_REPLICA = 64 BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync ###Output _____no_output_____ ###Markdown Define a function that normalizes the image pixel values from the `[0, 255]` range to the `[0, 1]` range ([feature scaling](https://en.wikipedia.org/wiki/Feature_scaling)): ###Code def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label ###Output _____no_output_____ ###Markdown Apply this `scale` function to the training and test data, and then use the `tf.data.Dataset` APIs to shuffle the training data (`Dataset.shuffle`), and batch it (`Dataset.batch`). Notice that you are also keeping an in-memory cache of the training data to improve performance (`Dataset.cache`). ###Code train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE) eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE) ###Output _____no_output_____ ###Markdown Create the model Create and compile the Keras model in the context of `Strategy.scope`: ###Code with strategy.scope(): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10) ]) model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) ###Output INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). 
###Markdown Define the callbacks Define the following `tf.keras.callbacks`:- `tf.keras.callbacks.TensorBoard`: writes a log for TensorBoard, which allows you to visualize the graphs.- `tf.keras.callbacks.ModelCheckpoint`: saves the model at a certain frequency, such as after every epoch.- `tf.keras.callbacks.LearningRateScheduler`: schedules the learning rate to change after, for example, every epoch/batch.For illustrative purposes, add a custom callback called `PrintLR` to display the *learning rate* in the notebook. ###Code # Define the checkpoint directory to store the checkpoints. checkpoint_dir = './training_checkpoints' # Define the name of the checkpoint files. checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") # Define a function for decaying the learning rate. # You can define any decay function you need. def decay(epoch): if epoch < 3: return 1e-3 elif epoch >= 3 and epoch < 7: return 1e-4 else: return 1e-5 # Define a callback for printing the learning rate at the end of each epoch. class PrintLR(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): print('\nLearning rate for epoch {} is {}'.format(epoch + 1, model.optimizer.lr.numpy())) # Put all the callbacks together. callbacks = [ tf.keras.callbacks.TensorBoard(log_dir='./logs'), tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix, save_weights_only=True), tf.keras.callbacks.LearningRateScheduler(decay), PrintLR() ] ###Output 2021-08-04 01:25:02.054144: I tensorflow/core/profiler/lib/profiler_session.cc:126] Profiler session initializing. 2021-08-04 01:25:02.054179: I tensorflow/core/profiler/lib/profiler_session.cc:141] Profiler session started. 
2021-08-04 01:25:02.054232: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1611] Profiler found 1 GPUs 2021-08-04 01:25:02.098001: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcupti.so.11.2 ###Markdown Train and evaluate Now, train the model in the usual way by calling `Model.fit` on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not. ###Code EPOCHS = 12 model.fit(train_dataset, epochs=EPOCHS, callbacks=callbacks) ###Output 2021-08-04 01:25:02.342811: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:461] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed. 2021-08-04 01:25:02.389307: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2) 2021-08-04 01:25:02.389734: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2000179999 Hz ###Markdown Check for saved checkpoints: ###Code # Check the checkpoint directory. 
!ls {checkpoint_dir} ###Output checkpoint ckpt_4.data-00000-of-00001 ckpt_1.data-00000-of-00001 ckpt_4.index ckpt_1.index ckpt_5.data-00000-of-00001 ckpt_10.data-00000-of-00001 ckpt_5.index ckpt_10.index ckpt_6.data-00000-of-00001 ckpt_11.data-00000-of-00001 ckpt_6.index ckpt_11.index ckpt_7.data-00000-of-00001 ckpt_12.data-00000-of-00001 ckpt_7.index ckpt_12.index ckpt_8.data-00000-of-00001 ckpt_2.data-00000-of-00001 ckpt_8.index ckpt_2.index ckpt_9.data-00000-of-00001 ckpt_3.data-00000-of-00001 ckpt_9.index ckpt_3.index ###Markdown To check how well the model performs, load the latest checkpoint and call `Model.evaluate` on the test data: ###Code model.load_weights(tf.train.latest_checkpoint(checkpoint_dir)) eval_loss, eval_acc = model.evaluate(eval_dataset) print('Eval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc)) ###Output 2021-08-04 01:25:49.277864: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:461] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed. ###Markdown To visualize the output, launch TensorBoard and view the logs: ###Code %tensorboard --logdir=logs ###Output _____no_output_____ ###Code !ls -sh ./logs ###Output total 4.0K 4.0K train ###Markdown Export to SavedModel Export the graph and the variables to the platform-agnostic SavedModel format using `Model.save`. After your model is saved, you can load it with or without the `Strategy.scope`. ###Code path = 'saved_model/' model.save(path, save_format='tf') ###Output 2021-08-04 01:25:51.983973: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. 
###Markdown Now, load the model without `Strategy.scope`: ###Code unreplicated_model = tf.keras.models.load_model(path) unreplicated_model.compile( loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset) print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc)) ###Output 1/157 [..............................] - ETA: 28s - loss: 0.0786 - accuracy: 0.9688 ###Markdown Load the model with `Strategy.scope`: ###Code with strategy.scope(): replicated_model = tf.keras.models.load_model(path) replicated_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) eval_loss, eval_acc = replicated_model.evaluate(eval_dataset) print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc)) ###Output 2021-08-04 01:25:53.544239: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:461] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed. 
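One small point worth making explicit about the checkpoint files listed earlier: the `ckpt_{epoch}` prefix passed to `ModelCheckpoint` is an ordinary Python format string that Keras fills in with the (1-based) epoch number, which is why the directory contains `ckpt_1` through `ckpt_12`. A quick sketch:

```python
import os

# The same prefix the tutorial builds for ModelCheckpoint.
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

# Keras substitutes the epoch number when it saves weights each epoch,
# producing the ckpt_1 ... ckpt_12 files seen in the `ls` output above.
for epoch in (1, 2, 12):
    print(checkpoint_prefix.format(epoch=epoch))
```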
###Markdown 2048 Keras ###Code import keras from keras.models import Model from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, Input, concatenate, BatchNormalization, Activation from keras.optimizers import Adadelta import numpy as np BATCH_SIZE = 128 NUM_EPOCHS = 15 inputs = Input((4,4,16)) conv = inputs FILTERS = 128 conv41 = Conv2D(filters=FILTERS, kernel_size=(4,1), kernel_initializer='he_uniform')(conv) conv14 = Conv2D(filters=FILTERS, kernel_size=(1,4), kernel_initializer='he_uniform')(conv) conv22 = Conv2D(filters=FILTERS, kernel_size=(2,2), kernel_initializer='he_uniform')(conv) conv33 = Conv2D(filters=FILTERS, kernel_size=(3,3), kernel_initializer='he_uniform')(conv) conv44 = Conv2D(filters=FILTERS, kernel_size=(4,4), kernel_initializer='he_uniform')(conv) hidden = concatenate([Flatten()(conv41), Flatten()(conv14), Flatten()(conv22), Flatten()(conv33), Flatten()(conv44)]) x = BatchNormalization()(hidden) x = Activation('relu')(x)  # apply the activation to the normalized tensor, not to `hidden` for width in [512,128]: x = Dense(width,kernel_initializer='he_uniform')(x) x = BatchNormalization()(x) x = Activation('relu')(x) outputs = Dense(4,activation='softmax')(x) model = Model(inputs, outputs) model.summary() model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) OUT_SHAPE = (4,4) CAND = 16 #map_table = {2**i : i for i in range(1,CAND)} #map_table[0] = 0 def grid_one(arr): ret = np.zeros(shape=OUT_SHAPE+(CAND,),dtype=bool) # shape = (4,4,16) for r in range(OUT_SHAPE[0]): for c in range(OUT_SHAPE[1]): ret[r,c,arr[r,c]] = 1 return ret import csv data_raw = [] with open("train/train1M_1.csv") as f: for line in f: piece = eval(line) data_raw.append(piece) len(data_raw) ###Output _____no_output_____ ###Markdown import json with open("train7.json",'w') as f: json.dump(data7.tolist(),f) ###Code import json data7 = [] with open("train7.json",'r') as f: data7 = json.load(f) data7 = np.array(data7) data = data7 data7.shape data = np.array(data_raw) data.shape x = np.array([ 
grid_one(piece[:-1].reshape(4,4)) for piece in data ]) y = keras.utils.to_categorical(data[:,-1], 4) sep = 9000 x_train = x[:sep] x_test = x[sep:] y_train = y[:sep] y_test = y[sep:] x_test.shape # train , validation_data=(x_test,y_test) model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=1, verbose=1) # evaluate #score_train = model.evaluate(x_train,y_train,verbose=0) #print('Training loss: %.4f, Training accuracy: %.2f%%' % (score_train[0],score_train[1])) score_test = model.evaluate(x_test,y_test,verbose=0) print('Testing loss: %.4f, Testing accuracy: %.2f' % (score_test[0],score_test[1])) model.save('model_k.h5') # creates a HDF5 file 'model_k.h5' # returns a compiled model # identical to the previous one model = keras.models.load_model('model_k.h5') ###Output _____no_output_____ ###Markdown Keras as a high-level DNN design tool ###Code from keras.models import Sequential model = Sequential() from keras.layers import Dense, Activation model.add(Dense(64, input_dim=100)) model.add(Activation("sigmoid")) model.add(Dense(10)) model.add(Activation("softmax")) model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) from IPython.display import SVG from keras.utils.vis_utils import model_to_dot SVG(model_to_dot(model).create(prog='dot', format='svg')) import numpy as np X_train = np.random.uniform(low=0, high=100, size=[100,100]) print(X_train.shape) Y_train = np.matrix([[1]+[0]*9]*100) print(Y_train.shape) model.fit(X_train, Y_train, epochs=1000, batch_size=20) ###Output Epoch 1/1000 100/100 [==============================] - 0s - loss: 1.1566 - acc: 0.8300 Epoch 2/1000 100/100 [==============================] - 0s - loss: 0.4642 - acc: 1.0000 Epoch 3/1000 100/100 [==============================] - 0s - loss: 0.2602 - acc: 1.0000 Epoch 4/1000 100/100 [==============================] - 0s - loss: 0.1816 - acc: 1.0000 Epoch 5/1000 100/100 [==============================] - 0s - loss: 0.1391 - acc: 
1.0000 Epoch 6/1000 100/100 [==============================] - 0s - loss: 0.1117 - acc: 1.0000 Epoch 7/1000 100/100 [==============================] - 0s - loss: 0.0932 - acc: 1.0000 Epoch 8/1000 100/100 [==============================] - 0s - loss: 0.0778 - acc: 1.0000 Epoch 9/1000 100/100 [==============================] - 0s - loss: 0.0679 - acc: 1.0000 Epoch 10/1000 100/100 [==============================] - 0s - loss: 0.0610 - acc: 1.0000 Epoch 11/1000 100/100 [==============================] - 0s - loss: 0.0554 - acc: 1.0000 Epoch 12/1000 100/100 [==============================] - 0s - loss: 0.0510 - acc: 1.0000 Epoch 13/1000 100/100 [==============================] - 0s - loss: 0.0471 - acc: 1.0000 Epoch 14/1000 100/100 [==============================] - 0s - loss: 0.0434 - acc: 1.0000 Epoch 15/1000 100/100 [==============================] - 0s - loss: 0.0399 - acc: 1.0000 Epoch 16/1000 100/100 [==============================] - 0s - loss: 0.0371 - acc: 1.0000 Epoch 17/1000 100/100 [==============================] - 0s - loss: 0.0349 - acc: 1.0000 Epoch 18/1000 100/100 [==============================] - 0s - loss: 0.0331 - acc: 1.0000 Epoch 19/1000 100/100 [==============================] - 0s - loss: 0.0314 - acc: 1.0000 Epoch 20/1000 100/100 [==============================] - 0s - loss: 0.0298 - acc: 1.0000 Epoch 21/1000 100/100 [==============================] - 0s - loss: 0.0283 - acc: 1.0000 Epoch 22/1000 100/100 [==============================] - 0s - loss: 0.0268 - acc: 1.0000 Epoch 23/1000 100/100 [==============================] - 0s - loss: 0.0253 - acc: 1.0000 Epoch 24/1000 100/100 [==============================] - 0s - loss: 0.0242 - acc: 1.0000 Epoch 25/1000 100/100 [==============================] - 0s - loss: 0.0232 - acc: 1.0000 Epoch 26/1000 100/100 [==============================] - 0s - loss: 0.0223 - acc: 1.0000 Epoch 27/1000 100/100 [==============================] - 0s - loss: 0.0215 - acc: 1.0000 Epoch 28/1000 100/100 
[==============================] - 0s - loss: 0.0207 - acc: 1.0000 Epoch 29/1000 100/100 [==============================] - 0s - loss: 0.0200 - acc: 1.0000 Epoch 30/1000 100/100 [==============================] - 0s - loss: 0.0193 - acc: 1.0000 Epoch 31/1000 100/100 [==============================] - 0s - loss: 0.0187 - acc: 1.0000 Epoch 32/1000 100/100 [==============================] - 0s - loss: 0.0181 - acc: 1.0000 Epoch 33/1000 100/100 [==============================] - 0s - loss: 0.0176 - acc: 1.0000 Epoch 34/1000 100/100 [==============================] - 0s - loss: 0.0170 - acc: 1.0000 Epoch 35/1000 100/100 [==============================] - 0s - loss: 0.0165 - acc: 1.0000 Epoch 36/1000 100/100 [==============================] - 0s - loss: 0.0160 - acc: 1.0000 Epoch 37/1000 100/100 [==============================] - 0s - loss: 0.0155 - acc: 1.0000 Epoch 38/1000 100/100 [==============================] - 0s - loss: 0.0150 - acc: 1.0000 Epoch 39/1000 100/100 [==============================] - 0s - loss: 0.0146 - acc: 1.0000 Epoch 40/1000 100/100 [==============================] - 0s - loss: 0.0142 - acc: 1.0000 Epoch 41/1000 100/100 [==============================] - 0s - loss: 0.0138 - acc: 1.0000 Epoch 42/1000 100/100 [==============================] - 0s - loss: 0.0134 - acc: 1.0000 Epoch 43/1000 100/100 [==============================] - 0s - loss: 0.0131 - acc: 1.0000 Epoch 44/1000 100/100 [==============================] - 0s - loss: 0.0127 - acc: 1.0000 Epoch 45/1000 100/100 [==============================] - 0s - loss: 0.0124 - acc: 1.0000 Epoch 46/1000 100/100 [==============================] - 0s - loss: 0.0121 - acc: 1.0000 Epoch 47/1000 100/100 [==============================] - 0s - loss: 0.0118 - acc: 1.0000 Epoch 48/1000 100/100 [==============================] - 0s - loss: 0.0115 - acc: 1.0000 Epoch 49/1000 100/100 [==============================] - 0s - loss: 0.0113 - acc: 1.0000 Epoch 50/1000 100/100 [==============================] - 0s - 
loss: 0.0110 - acc: 1.0000 Epoch 51/1000 100/100 [==============================] - 0s - loss: 0.0108 - acc: 1.0000 Epoch 52/1000 100/100 [==============================] - 0s - loss: 0.0106 - acc: 1.0000 Epoch 53/1000 100/100 [==============================] - 0s - loss: 0.0104 - acc: 1.0000 Epoch 54/1000 100/100 [==============================] - 0s - loss: 0.0101 - acc: 1.0000 Epoch 55/1000 100/100 [==============================] - 0s - loss: 0.0099 - acc: 1.0000 Epoch 56/1000 100/100 [==============================] - 0s - loss: 0.0097 - acc: 1.0000 Epoch 57/1000 100/100 [==============================] - 0s - loss: 0.0096 - acc: 1.0000 Epoch 58/1000 100/100 [==============================] - 0s - loss: 0.0094 - acc: 1.0000 Epoch 59/1000 100/100 [==============================] - 0s - loss: 0.0092 - acc: 1.0000 Epoch 60/1000 100/100 [==============================] - 0s - loss: 0.0091 - acc: 1.0000 Epoch 61/1000 100/100 [==============================] - 0s - loss: 0.0089 - acc: 1.0000 Epoch 62/1000 100/100 [==============================] - 0s - loss: 0.0088 - acc: 1.0000 Epoch 63/1000 100/100 [==============================] - 0s - loss: 0.0086 - acc: 1.0000 Epoch 64/1000 100/100 [==============================] - 0s - loss: 0.0085 - acc: 1.0000 Epoch 65/1000 100/100 [==============================] - 0s - loss: 0.0084 - acc: 1.0000 Epoch 66/1000 100/100 [==============================] - 0s - loss: 0.0082 - acc: 1.0000 Epoch 67/1000 100/100 [==============================] - 0s - loss: 0.0081 - acc: 1.0000 Epoch 68/1000 100/100 [==============================] - 0s - loss: 0.0080 - acc: 1.0000 Epoch 69/1000 100/100 [==============================] - 0s - loss: 0.0079 - acc: 1.0000 Epoch 70/1000 100/100 [==============================] - 0s - loss: 0.0078 - acc: 1.0000 Epoch 71/1000 100/100 [==============================] - 0s - loss: 0.0077 - acc: 1.0000 Epoch 72/1000 100/100 [==============================] - 0s - loss: 0.0076 - acc: 1.0000 Epoch 73/1000 
100/100 [==============================] - 0s - loss: 0.0075 - acc: 1.0000 Epoch 74/1000 100/100 [==============================] - 0s - loss: 0.0074 - acc: 1.0000 Epoch 75/1000 100/100 [==============================] - 0s - loss: 0.0073 - acc: 1.0000 Epoch 76/1000 100/100 [==============================] - 0s - loss: 0.0072 - acc: 1.0000 Epoch 77/1000 100/100 [==============================] - 0s - loss: 0.0071 - acc: 1.0000 Epoch 78/1000 100/100 [==============================] - 0s - loss: 0.0070 - acc: 1.0000 Epoch 79/1000 100/100 [==============================] - 0s - loss: 0.0069 - acc: 1.0000 Epoch 80/1000 100/100 [==============================] - 0s - loss: 0.0068 - acc: 1.0000 Epoch 81/1000 100/100 [==============================] - 0s - loss: 0.0067 - acc: 1.0000 Epoch 82/1000 100/100 [==============================] - 0s - loss: 0.0067 - acc: 1.0000 Epoch 83/1000 100/100 [==============================] - 0s - loss: 0.0066 - acc: 1.0000 Epoch 84/1000 100/100 [==============================] - 0s - loss: 0.0065 - acc: 1.0000 Epoch 85/1000 100/100 [==============================] - 0s - loss: 0.0064 - acc: 1.0000 Epoch 86/1000 100/100 [==============================] - 0s - loss: 0.0063 - acc: 1.0000 Epoch 87/1000 100/100 [==============================] - 0s - loss: 0.0063 - acc: 1.0000 Epoch 88/1000 100/100 [==============================] - 0s - loss: 0.0062 - acc: 1.0000 Epoch 89/1000 100/100 [==============================] - 0s - loss: 0.0061 - acc: 1.0000 Epoch 90/1000 100/100 [==============================] - 0s - loss: 0.0060 - acc: 1.0000 Epoch 91/1000 100/100 [==============================] - 0s - loss: 0.0060 - acc: 1.0000 Epoch 92/1000 100/100 [==============================] - 0s - loss: 0.0059 - acc: 1.0000 Epoch 93/1000 100/100 [==============================] - 0s - loss: 0.0058 - acc: 1.0000 Epoch 94/1000 100/100 [==============================] - 0s - loss: 0.0057 - acc: 1.0000 Epoch 95/1000 100/100 [==============================] 
- 0s - loss: 0.0057 - acc: 1.0000 Epoch 96/1000 100/100 [==============================] - 0s - loss: 0.0056 - acc: 1.0000 Epoch 97/1000 100/100 [==============================] - 0s - loss: 0.0055 - acc: 1.0000 Epoch 98/1000 100/100 [==============================] - 0s - loss: 0.0054 - acc: 1.0000 Epoch 99/1000 100/100 [==============================] - 0s - loss: 0.0054 - acc: 1.0000 Epoch 100/1000 100/100 [==============================] - 0s - loss: 0.0053 - acc: 1.0000 Epoch 101/1000 100/100 [==============================] - 0s - loss: 0.0052 - acc: 1.0000 Epoch 102/1000 100/100 [==============================] - 0s - loss: 0.0052 - acc: 1.0000 Epoch 103/1000 100/100 [==============================] - 0s - loss: 0.0051 - acc: 1.0000 Epoch 104/1000 100/100 [==============================] - 0s - loss: 0.0051 - acc: 1.0000 Epoch 105/1000 100/100 [==============================] - 0s - loss: 0.0050 - acc: 1.0000 Epoch 106/1000 100/100 [==============================] - 0s - loss: 0.0050 - acc: 1.0000 Epoch 107/1000 100/100 [==============================] - 0s - loss: 0.0049 - acc: 1.0000 Epoch 108/1000 100/100 [==============================] - 0s - loss: 0.0049 - acc: 1.0000 Epoch 109/1000 100/100 [==============================] - 0s - loss: 0.0048 - acc: 1.0000 Epoch 110/1000 100/100 [==============================] - 0s - loss: 0.0048 - acc: 1.0000 Epoch 111/1000 100/100 [==============================] - 0s - loss: 0.0047 - acc: 1.0000 Epoch 112/1000 100/100 [==============================] - 0s - loss: 0.0047 - acc: 1.0000 Epoch 113/1000 100/100 [==============================] - 0s - loss: 0.0047 - acc: 1.0000 Epoch 114/1000 100/100 [==============================] - 0s - loss: 0.0046 - acc: 1.0000 Epoch 115/1000 100/100 [==============================] - 0s - loss: 0.0046 - acc: 1.0000 Epoch 116/1000 100/100 [==============================] - 0s - loss: 0.0045 - acc: 1.0000 Epoch 117/1000 100/100 [==============================] - 0s - loss: 0.0045 - 
acc: 1.0000 Epoch 118/1000 100/100 [==============================] - 0s - loss: 0.0045 - acc: 1.0000 Epoch 119/1000 100/100 [==============================] - 0s - loss: 0.0044 - acc: 1.0000 Epoch 120/1000 100/100 [==============================] - 0s - loss: 0.0044 - acc: 1.0000 Epoch 121/1000 100/100 [==============================] - 0s - loss: 0.0044 - acc: 1.0000 Epoch 122/1000 100/100 [==============================] - 0s - loss: 0.0043 - acc: 1.0000 Epoch 123/1000 100/100 [==============================] - 0s - loss: 0.0043 - acc: 1.0000 Epoch 124/1000 100/100 [==============================] - 0s - loss: 0.0043 - acc: 1.0000 Epoch 125/1000 100/100 [==============================] - 0s - loss: 0.0042 - acc: 1.0000 Epoch 126/1000 100/100 [==============================] - 0s - loss: 0.0042 - acc: 1.0000 Epoch 127/1000 100/100 [==============================] - 0s - loss: 0.0042 - acc: 1.0000 Epoch 128/1000 100/100 [==============================] - 0s - loss: 0.0041 - acc: 1.0000 Epoch 129/1000 100/100 [==============================] - 0s - loss: 0.0041 - acc: 1.0000 Epoch 130/1000 100/100 [==============================] - 0s - loss: 0.0041 - acc: 1.0000 Epoch 131/1000 100/100 [==============================] - 0s - loss: 0.0040 - acc: 1.0000 Epoch 132/1000 100/100 [==============================] - 0s - loss: 0.0040 - acc: 1.0000 Epoch 133/1000 100/100 [==============================] - 0s - loss: 0.0040 - acc: 1.0000 Epoch 134/1000 100/100 [==============================] - 0s - loss: 0.0039 - acc: 1.0000 Epoch 135/1000 100/100 [==============================] - 0s - loss: 0.0039 - acc: 1.0000 Epoch 136/1000 100/100 [==============================] - 0s - loss: 0.0039 - acc: 1.0000 Epoch 137/1000 100/100 [==============================] - 0s - loss: 0.0039 - acc: 1.0000 Epoch 138/1000 100/100 [==============================] - 0s - loss: 0.0038 - acc: 1.0000 Epoch 139/1000 100/100 [==============================] - 0s - loss: 0.0038 - acc: 1.0000 Epoch 
140/1000 100/100 [==============================] - 0s - loss: 0.0038 - acc: 1.0000 Epoch 141/1000 100/100 [==============================] - 0s - loss: 0.0038 - acc: 1.0000 Epoch 142/1000 100/100 [==============================] - 0s - loss: 0.0037 - acc: 1.0000 Epoch 143/1000 100/100 [==============================] - 0s - loss: 0.0037 - acc: 1.0000 Epoch 144/1000 100/100 [==============================] - 0s - loss: 0.0037 - acc: 1.0000 Epoch 145/1000 100/100 [==============================] - 0s - loss: 0.0037 - acc: 1.0000 Epoch 146/1000 100/100 [==============================] - 0s - loss: 0.0036 - acc: 1.0000 Epoch 147/1000 100/100 [==============================] - 0s - loss: 0.0036 - acc: 1.0000 Epoch 148/1000 100/100 [==============================] - 0s - loss: 0.0036 - acc: 1.0000 Epoch 149/1000 100/100 [==============================] - 0s - loss: 0.0036 - acc: 1.0000 Epoch 150/1000 100/100 [==============================] - 0s - loss: 0.0035 - acc: 1.0000 Epoch 151/1000 100/100 [==============================] - 0s - loss: 0.0035 - acc: 1.0000 Epoch 152/1000 100/100 [==============================] - 0s - loss: 0.0035 - acc: 1.0000 Epoch 153/1000 100/100 [==============================] - 0s - loss: 0.0035 - acc: 1.0000 Epoch 154/1000 100/100 [==============================] - 0s - loss: 0.0035 - acc: 1.0000 Epoch 155/1000 100/100 [==============================] - 0s - loss: 0.0034 - acc: 1.0000 Epoch 156/1000 100/100 [==============================] - 0s - loss: 0.0034 - acc: 1.0000 Epoch 157/1000 100/100 [==============================] - 0s - loss: 0.0034 - acc: 1.0000 Epoch 158/1000 100/100 [==============================] - 0s - loss: 0.0034 - acc: 1.0000 Epoch 159/1000 100/100 [==============================] - 0s - loss: 0.0034 - acc: 1.0000 Epoch 160/1000 100/100 [==============================] - 0s - loss: 0.0033 - acc: 1.0000 Epoch 161/1000 100/100 [==============================] - 0s - loss: 0.0033 - acc: 1.0000 Epoch 162/1000 100/100 
[==============================] - 0s - loss: 0.0033 - acc: 1.0000 Epoch 163/1000 100/100 [==============================] - 0s - loss: 0.0033 - acc: 1.0000 Epoch 164/1000 100/100 [==============================] - 0s - loss: 0.0033 - acc: 1.0000 Epoch 165/1000 100/100 [==============================] - 0s - loss: 0.0032 - acc: 1.0000 Epoch 166/1000 100/100 [==============================] - 0s - loss: 0.0032 - acc: 1.0000 Epoch 167/1000 100/100 [==============================] - 0s - loss: 0.0032 - acc: 1.0000 Epoch 168/1000 100/100 [==============================] - 0s - loss: 0.0032 - acc: 1.0000 Epoch 169/1000 100/100 [==============================] - 0s - loss: 0.0032 - acc: 1.0000 Epoch 170/1000 100/100 [==============================] - 0s - loss: 0.0031 - acc: 1.0000 Epoch 171/1000 100/100 [==============================] - 0s - loss: 0.0031 - acc: 1.0000 Epoch 172/1000 100/100 [==============================] - 0s - loss: 0.0031 - acc: 1.0000 Epoch 173/1000 100/100 [==============================] - 0s - loss: 0.0031 - acc: 1.0000 Epoch 174/1000 100/100 [==============================] - 0s - loss: 0.0031 - acc: 1.0000 Epoch 175/1000 100/100 [==============================] - 0s - loss: 0.0031 - acc: 1.0000 Epoch 176/1000 100/100 [==============================] - 0s - loss: 0.0030 - acc: 1.0000 Epoch 177/1000 100/100 [==============================] - 0s - loss: 0.0030 - acc: 1.0000 Epoch 178/1000 100/100 [==============================] - 0s - loss: 0.0030 - acc: 1.0000 Epoch 179/1000 100/100 [==============================] - 0s - loss: 0.0030 - acc: 1.0000 Epoch 180/1000 100/100 [==============================] - 0s - loss: 0.0030 - acc: 1.0000 Epoch 181/1000 100/100 [==============================] - 0s - loss: 0.0030 - acc: 1.0000 Epoch 182/1000 100/100 [==============================] - 0s - loss: 0.0029 - acc: 1.0000 Epoch 183/1000 100/100 [==============================] - 0s - loss: 0.0029 - acc: 1.0000 Epoch 184/1000 100/100 
[==============================] - 0s - loss: 0.0029 - acc: 1.0000 Epoch 185/1000 ... (training log truncated: loss decreases steadily from 0.0029 to 0.0015 while accuracy stays at 1.0000 through Epoch 360/1000) ...
[==============================] - 0s - loss: 0.0015 - acc: 1.0000 Epoch 361/1000 ... (log truncated) ... Epoch 370/1000 20/100 [=====>........................] - ETA: 0s - loss: 0.0015 - acc: 1.0000 ###Markdown Train the LSTM and compute train/test predictions ###Code
model1 = train_lstm(train_features, Y_train)

trainPredict = model1.predict(train_features)
testPredict = model1.predict(test_features)
print(trainPredict.shape)

from sklearn.metrics import mean_squared_error

# invert predictions
#trainPredict = scaler.inverse_transform(trainPredict)
#trainY = scaler.inverse_transform([Y_train])
#testPredict = scaler.inverse_transform(testPredict)
#testY = scaler.inverse_transform([Y_test])

# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(Y_train, trainPredict))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(Y_test, testPredict))
print('Test Score: %.2f RMSE' % (testScore))

# shift train predictions for plotting
trainPredictPlot = np.empty_like(features)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[1:len(trainPredict)+1, :] = trainPredict

# shift test predictions for plotting
testPredictPlot = np.empty_like(features)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict)+(1*2)+1:len(features)-1, :] = testPredict

# plot baseline and predictions
plt.plot(scaler.inverse_transform(features))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()
###Output (417, 1) Train Score: 0.03 RMSE Test Score: 0.04 RMSE ###Markdown Pad track_id to a six-digit string ###Code
train_df = pd.read_csv("Label2.csv")

# Zero-pad each track_id to six characters; the appended "A" is sliced off
# again below, so the net effect is just str(...).zfill(6).
num = 0
for column in train_df["track_id"]:
    train_df["track_id"][num] = str(column).zfill(6) + "A"
    num += 1

a = 0
for column in train_df["track_id"]:
    train_df["track_id"][a] = train_df["track_id"][a][0:6]
    a += 1

train_df.shape
#df = pd.DataFrame(train_df)
#df.to_csv("new_id(1).csv")
###Output C:\Users\LG\Anaconda3\lib\site-packages\ipykernel_launcher.py:4: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy ###Markdown Image Load ###Code
train_df["images"] = [np.array(load_img("../fma-master/img_all_edit/{}.jpg".format(idx))) for idx in np.unique(train_df["track_id"])]
train_df.ndim
train_df.shape
train_df
###Output _____no_output_____ ###Markdown Saving with df = pd.DataFrame(train_df); df.to_csv("imagedata.csv") does not preserve the six-digit string IDs. Image Cut
* Crop the white margins from each image and save the result
* Cropped size: 556x216

import scipy.misc
for idx in np.unique(train_df["track_id"]):
    img_arr = np.array(load_img("../fma-master/img_all/{}.jpg".format(idx)))
    edit_img = img_arr[11:227, 12:568, :]
    scipy.misc.imsave('../fma-master/img_all_edit/{}.jpg'.format(idx), edit_img)
###Code
#train_df["images"]
#train_df["images"].reshape(7997,)
###Output
_____no_output_____ ###Markdown Organize the data ###Code
from sklearn.model_selection import train_test_split

# Cell that splits the data into train/validation ids, images (x) and genre labels (y)
ids_train, ids_valid, x_train, x_valid, y_train, y_valid = train_test_split(
    train_df.index.values, train_df.images, train_df.genre_top, test_size=0.2)
ids_train.shape
ids_valid.shape
x_train
x_valid.shape
y_valid
y_train.shape
###Output _____no_output_____ ###Markdown Build Model
* A simple model first
* ResNet
* VGG
* extra ###Code
from keras import layers
from keras import models

#model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
#model.add(layers.MaxPooling2D((2, 2)))
#model.add(layers.Dropout(0.5))
#model.add(layers.Flatten())
#model.add(layers.Dense(64, activation='softmax'))
#model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
###Output _____no_output_____ ###Markdown Model 1 ###Code
def build_model_1():
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(10, activation='softmax'))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model
###Output _____no_output_____ ###Markdown Model 2 ###Code
def build_model_2():
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(10, activation='softmax'))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model
###Output _____no_output_____ ###Markdown Model 3 ###Code
def build_model_3():
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(10, activation='softmax'))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model
###Output _____no_output_____ ###Markdown Select THE Model ###Code
model = build_model_1()
model.summary()
###Output _________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 11, 11, 64)        18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64)          0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 3, 3, 64)          36928
_________________________________________________________________
dropout_1 (Dropout)          (None, 3, 3, 64)          0
_________________________________________________________________
flatten_1 (Flatten)          (None, 576)               0
_________________________________________________________________
dense_1 (Dense)              (None, 64)                36928
_________________________________________________________________
dense_2 (Dense)              (None, 10)                650
=================================================================
Total params: 93,322
Trainable params: 93,322
Non-trainable params: 0
_________________________________________________________________
###Markdown Fit ###Code
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, batch_size=64)
###Output _____no_output_____ ###Markdown Evaluate ###Code
test_loss, test_acc = model.evaluate(test_images, test_labels)
###Output _____no_output_____ ###Markdown Loading the data ###Code
%matplotlib inline
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train / 255
x_test = x_test / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
print(type(x_train))

model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=(32, 32, 3)))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(32, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation(LeakyReLU()))
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dense(512))
model.add(Activation('sigmoid'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, batch_size=16, epochs=20, verbose=1, validation_split=0.1)
score = model.evaluate(x_test, y_test)
print(score)
###Output 10000/10000 [==============================] - 4s 447us/step [2.3250877521514894, 0.1]
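The evaluation above reports a score of roughly `[2.33, 0.1]`: accuracy 0.10 is chance level for 10 classes, and the loss sits near the cross-entropy of a uniform prediction, ln(10) ≈ 2.303, which suggests the network learned essentially nothing. A minimal NumPy sketch (standalone, not reusing the Keras objects above) of the same preprocessing transforms and that chance-level loss:

```python
import math
import numpy as np

# One-hot encode labels the way to_categorical(y, 10) does.
y = np.array([3, 0, 9])
one_hot = np.eye(10)[y]

# Rescale uint8 pixels to [0, 1] the way x_train / 255 does.
pixels = np.array([0, 128, 255], dtype=np.uint8)
scaled = pixels / 255

# Cross-entropy of a uniform softmax over 10 classes: -log(1/10) = ln(10).
chance_loss = -math.log(1 / 10)
print(one_hot[0])             # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
print(round(chance_loss, 3))  # 2.303
```

A test loss close to 2.303 with 10% accuracy is therefore a useful sanity check that the model is predicting a near-uniform distribution rather than fitting the data.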
python/examples/ipynb/AI_platform_demo.ipynb
###Markdown Introduction
This is a demonstration notebook. Suppose you have developed a model whose training is constrained by the resources available to the notebook VM. In that case, you may want to use the [Google AI Platform](https://cloud.google.com/ml-engine/docs/tensorflow/) to train your model. The advantage is that long-running or resource-intensive training jobs can be performed in the background. Also, to use your trained model in Earth Engine, it needs to be [deployed as a hosted model](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) on AI Platform. This notebook uses previously created training data (see [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb)) and AI Platform to train a model, deploy it and use it to make predictions in Earth Engine. To do that, the code [needs to be structured as a Python package](https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer) that can be uploaded to AI Platform. The following cells produce that package programmatically. Setup software libraries
Install the needed libraries to the notebook VM and authenticate as necessary. ###Code
# Cloud authentication.
from google.colab import auth
auth.authenticate_user()

# Earth Engine install to notebook VM, authenticate.
!pip install earthengine-api

# Import and initialize the Earth Engine library.
import ee
ee.Authenticate()
ee.Initialize()

# Tensorflow setup.
import tensorflow as tf
tf.enable_eager_execution()
print(tf.__version__)

# Folium setup.
import folium
print(folium.__version__)

# Define the URL format used for Earth Engine generated map tiles.
EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'
###Output _____no_output_____ ###Markdown Training code package setup
It's necessary to create a Python package to hold the training code.
Here we'll get started by creating a folder for the package and adding an empty `__init__.py` file. ###Code
PACKAGE_PATH = 'ai_platform_demo'

!ls -l
!mkdir {PACKAGE_PATH}
!touch {PACKAGE_PATH}/__init__.py
!ls -l {PACKAGE_PATH}
###Output _____no_output_____ ###Markdown Variables
These variables need to be stored in a place where other code can access them. There are a variety of ways of accomplishing that, but here we'll use the `%%writefile` command to write the contents of the code cell to a file called `config.py`.

**Note:** You need to insert the name of a bucket (below) to which you have write access! ###Code
%%writefile {PACKAGE_PATH}/config.py

import tensorflow as tf

# INSERT YOUR BUCKET HERE!
BUCKET = 'your-bucket-name'

# Specify names of output locations in Cloud Storage.
FOLDER = 'fcnn-demo'
JOB_DIR = 'gs://' + BUCKET + '/' + FOLDER + '/trainer'
MODEL_DIR = JOB_DIR + '/model'
LOGS_DIR = JOB_DIR + '/logs'

# Pre-computed training and eval data.
DATA_BUCKET = 'ee-docs-demos'
TRAINING_BASE = 'training_patches'
EVAL_BASE = 'eval_patches'

# Specify inputs (Landsat bands) to the model and the response variable.
opticalBands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7']
thermalBands = ['B10', 'B11']
BANDS = opticalBands + thermalBands
RESPONSE = 'impervious'
FEATURES = BANDS + [RESPONSE]

# Specify the size and shape of patches expected by the model.
KERNEL_SIZE = 256
KERNEL_SHAPE = [KERNEL_SIZE, KERNEL_SIZE]
COLUMNS = [
  tf.io.FixedLenFeature(shape=KERNEL_SHAPE, dtype=tf.float32) for k in FEATURES
]
FEATURES_DICT = dict(zip(FEATURES, COLUMNS))

# Sizes of the training and evaluation datasets.
TRAIN_SIZE = 16000
EVAL_SIZE = 8000

# Specify model training parameters.
BATCH_SIZE = 16
EPOCHS = 50
BUFFER_SIZE = 3000
OPTIMIZER = 'SGD'
LOSS = 'MeanSquaredError'
METRICS = ['RootMeanSquaredError']
###Output _____no_output_____ ###Markdown Verify that the written file has the expected contents and that importing it works as intended.
###Code
!cat {PACKAGE_PATH}/config.py

from ai_platform_demo import config
print('\n\n', config.BATCH_SIZE)
###Output _____no_output_____ ###Markdown Training data, evaluation data and model
The following is code to load training/evaluation data and the model. Write this into `model.py`. Note that these functions are developed and explained in [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb). The source of the model code is [this demonstration notebook](https://github.com/tensorflow/models/blob/master/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb). ###Code
%%writefile {PACKAGE_PATH}/model.py

from . import config
import tensorflow as tf
from tensorflow.python.keras import layers
from tensorflow.python.keras import losses
from tensorflow.python.keras import metrics
from tensorflow.python.keras import models
from tensorflow.python.keras import optimizers

# Dataset loading functions

def parse_tfrecord(example_proto):
  return tf.io.parse_single_example(example_proto, config.FEATURES_DICT)

def to_tuple(inputs):
  inputsList = [inputs.get(key) for key in config.FEATURES]
  stacked = tf.stack(inputsList, axis=0)
  stacked = tf.transpose(stacked, [1, 2, 0])
  return stacked[:,:,:len(config.BANDS)], stacked[:,:,len(config.BANDS):]

def get_dataset(pattern):
  glob = tf.io.gfile.glob(pattern)
  dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP')
  dataset = dataset.map(parse_tfrecord)
  dataset = dataset.map(to_tuple)
  return dataset

def get_training_dataset():
  glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.TRAINING_BASE + '*'
  dataset = get_dataset(glob)
  dataset = dataset.shuffle(config.BUFFER_SIZE).batch(config.BATCH_SIZE).repeat()
  return dataset

def get_eval_dataset():
  glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.EVAL_BASE + '*'
  dataset = get_dataset(glob)
  dataset = dataset.batch(1).repeat()
  return dataset
# A variant of the UNET model.
def conv_block(input_tensor, num_filters):
  encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(input_tensor)
  encoder = layers.BatchNormalization()(encoder)
  encoder = layers.Activation('relu')(encoder)
  encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(encoder)
  encoder = layers.BatchNormalization()(encoder)
  encoder = layers.Activation('relu')(encoder)
  return encoder

def encoder_block(input_tensor, num_filters):
  encoder = conv_block(input_tensor, num_filters)
  encoder_pool = layers.MaxPooling2D((2, 2), strides=(2, 2))(encoder)
  return encoder_pool, encoder

def decoder_block(input_tensor, concat_tensor, num_filters):
  decoder = layers.Conv2DTranspose(num_filters, (2, 2), strides=(2, 2), padding='same')(input_tensor)
  decoder = layers.concatenate([concat_tensor, decoder], axis=-1)
  decoder = layers.BatchNormalization()(decoder)
  decoder = layers.Activation('relu')(decoder)
  decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder)
  decoder = layers.BatchNormalization()(decoder)
  decoder = layers.Activation('relu')(decoder)
  decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder)
  decoder = layers.BatchNormalization()(decoder)
  decoder = layers.Activation('relu')(decoder)
  return decoder

def get_model():
  inputs = layers.Input(shape=[None, None, len(config.BANDS)])  # 256
  encoder0_pool, encoder0 = encoder_block(inputs, 32)           # 128
  encoder1_pool, encoder1 = encoder_block(encoder0_pool, 64)    # 64
  encoder2_pool, encoder2 = encoder_block(encoder1_pool, 128)   # 32
  encoder3_pool, encoder3 = encoder_block(encoder2_pool, 256)   # 16
  encoder4_pool, encoder4 = encoder_block(encoder3_pool, 512)   # 8
  center = conv_block(encoder4_pool, 1024)                      # center
  decoder4 = decoder_block(center, encoder4, 512)               # 16
  decoder3 = decoder_block(decoder4, encoder3, 256)             # 32
  decoder2 = decoder_block(decoder3, encoder2, 128)             # 64
  decoder1 = decoder_block(decoder2, encoder1, 64)              # 128
  decoder0 = decoder_block(decoder1, encoder0, 32)              # 256
  outputs = layers.Conv2D(1, (1, 1), activation='sigmoid')(decoder0)

  model = models.Model(inputs=[inputs], outputs=[outputs])

  model.compile(
    optimizer=optimizers.get(config.OPTIMIZER),
    loss=losses.get(config.LOSS),
    metrics=[metrics.get(metric) for metric in config.METRICS])

  return model
###Output _____no_output_____ ###Markdown Verify that `model.py` is functioning as intended. ###Code
from ai_platform_demo import model

eval = model.get_eval_dataset()
print(iter(eval.take(1)).next())

model = model.get_model()
print(model.summary())
###Output _____no_output_____ ###Markdown Training task
At this stage, there should be a `config.py` storing variables and a `model.py` with code for getting the training/evaluation data and the model. All that's left is code for training the model. The following will create `task.py`, which will get the training and eval data, train the model, and save it in a Cloud Storage bucket when it's done. ###Code
%%writefile {PACKAGE_PATH}/task.py

from . import config
from . import model
import tensorflow as tf

if __name__ == '__main__':

  training = model.get_training_dataset()
  evaluation = model.get_eval_dataset()

  m = model.get_model()

  m.fit(
      x=training,
      epochs=config.EPOCHS,
      steps_per_epoch=int(config.TRAIN_SIZE / config.BATCH_SIZE),
      validation_data=evaluation,
      validation_steps=int(config.EVAL_SIZE),
      callbacks=[tf.keras.callbacks.TensorBoard(config.LOGS_DIR)])

  tf.contrib.saved_model.save_keras_model(m, config.MODEL_DIR)
###Output _____no_output_____ ###Markdown Submit the package to AI Platform for training
Now there's everything needed to submit this job, which can be done from the command line. First, define some needed variables.

**Note:** You need to insert the name of a Cloud project (below) you own! ###Code
import time

# INSERT YOUR PROJECT HERE!
PROJECT = 'your-project'

JOB_NAME = 'demo_training_job_' + str(int(time.time()))
TRAINER_PACKAGE_PATH = 'ai_platform_demo'
MAIN_TRAINER_MODULE = 'ai_platform_demo.task'
REGION = 'us-central1'
###Output _____no_output_____ ###Markdown Now the training job is ready to be started. First, you need to enable the ML API for your project. This can be done from [this link to the Cloud Console](https://console.developers.google.com/apis/library/ml.googleapis.com). See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/training-jobs) for details. Note that the Python and Tensorflow versions should match what is used in the Colab notebook. ###Code
!gcloud ai-platform jobs submit training {JOB_NAME} \
  --job-dir {config.JOB_DIR} \
  --package-path {TRAINER_PACKAGE_PATH} \
  --module-name {MAIN_TRAINER_MODULE} \
  --region {REGION} \
  --project {PROJECT} \
  --runtime-version 1.14 \
  --python-version 3.5 \
  --scale-tier basic-gpu
###Output _____no_output_____ ###Markdown Monitor the training job
There's not much more to do until the model is finished training (~24 hours), but it's fun and useful to monitor its progress. You can do that programmatically with another `gcloud` command. The output of that command can be read into an `IPython.utils.text.SList`, from which the `state` is extracted and checked to be `SUCCEEDED`. Or you can monitor it from the [AI Platform jobs page](http://console.cloud.google.com/ai-platform/jobs) on the Cloud Console. ###Code
desc = !gcloud ai-platform jobs describe {JOB_NAME} --project {PROJECT}
state = desc.grep('state:')[0].split(':')[1].strip()
print(state)
###Output _____no_output_____ ###Markdown Inspect the trained model
Once the training job has finished, verify that you can load the trained model and print a summary of the fitted parameters. It's also useful to examine the logs with [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard).
There's a convenient notebook extension that will launch TensorBoard in the Colab notebook. Examine the training and testing learning curves to ensure that the training process has converged. ###Code %load_ext tensorboard %tensorboard --logdir {config.LOGS_DIR} ###Output _____no_output_____ ###Markdown Prepare the model for making predictions in Earth EngineBefore we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to the input and outputs of the model in order to accommodate the interchange format between pixels from Earth Engine (float32) and inputs to AI Platform (base64). (See [this doc](https://cloud.google.com/ml-engine/docs/online-predict#binary_data_in_prediction_input) for details.) `earthengine model prepare`The EEification process is handled for you using the Earth Engine command `earthengine model prepare`. To use that command, we need to specify the input and output model directories and the name of the input and output nodes in the TensorFlow computation graph. We can do all that programmatically: ###Code from tensorflow.python.tools import saved_model_utils meta_graph_def = saved_model_utils.get_meta_graph_def(config.MODEL_DIR, 'serve') inputs = meta_graph_def.signature_def['serving_default'].inputs outputs = meta_graph_def.signature_def['serving_default'].outputs # Just get the first thing(s) from the serving signature def. i.e. this # model only has a single input and a single output. input_name = None for k,v in inputs.items(): input_name = v.name break output_name = None for k,v in outputs.items(): output_name = v.name break # Make a dictionary that maps Earth Engine outputs and inputs to # AI Platform inputs and outputs, respectively.
import json input_dict = "'" + json.dumps({input_name: "array"}) + "'" output_dict = "'" + json.dumps({output_name: "impervious"}) + "'" # Put the EEified model next to the trained model directory. EEIFIED_DIR = config.JOB_DIR + '/eeified' # You need to set the project before using the model prepare command. !earthengine set_project {PROJECT} !earthengine model prepare --source_dir {config.MODEL_DIR} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict} ###Output _____no_output_____ ###Markdown Note that you can also use the TensorFlow saved model command line tool to do this manually. See [this doc](https://www.tensorflow.org/guide/saved_model#cli_to_inspect_and_execute_savedmodel) for details. Also note the names we've specified for the new inputs and outputs: `array` and `impervious`, respectively. Perform inference using the trained model in Earth EngineBefore it's possible to get predictions from the trained and EEified model, it needs to be deployed on AI Platform. The first step is to create the model. The second step is to create a version. See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) for details. Note that models and versions can be monitored from the [AI Platform models page](http://console.cloud.google.com/ai-platform/models) of the Cloud Console. To ensure that the model is ready for predictions without having to warm up nodes, you can use a configuration yaml file to set the scaling type of this version to `autoScaling` and set a minimum number of nodes for the version. This will ensure there are always nodes on stand-by; however, you will be charged as long as they are running. For this example, we'll set `minNodes` to 10. That means that at a minimum, 10 nodes are always up and running and waiting for predictions. The number of nodes will also scale up automatically if needed.
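As an aside, if you would rather pin an exact node count than autoscale, the version configuration also accepts `manualScaling` in place of `autoScaling`. This is a sketch of that alternative (not used in this notebook; the node count is an arbitrary choice):

```yaml
# Alternative version config: a fixed fleet instead of autoscaling.
manualScaling:
  nodes: 10
```

With `manualScaling`, the node count never changes, so cost is predictable but the version cannot absorb traffic spikes.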
###Code %%writefile config.yaml autoScaling: minNodes: 10 MODEL_NAME = 'fcnn_demo_model' VERSION_NAME = 'v' + str(int(time.time())) print('Creating version: ' + VERSION_NAME) !gcloud ai-platform models create {MODEL_NAME} --project {PROJECT} !gcloud ai-platform versions create {VERSION_NAME} \ --project {PROJECT} \ --model {MODEL_NAME} \ --origin {EEIFIED_DIR} \ --runtime-version=1.14 \ --framework "TENSORFLOW" \ --python-version=3.5 --config=config.yaml ###Output _____no_output_____ ###Markdown There is now a trained model, prepared for serving to Earth Engine, hosted and versioned on AI Platform. We can now connect Earth Engine directly to the trained model for inference. You do that with the `ee.Model.fromAiPlatformPredictor` command. `ee.Model.fromAiPlatformPredictor`For this command to work, we need to know a lot about the model. To connect to the model, you need to know the name and version. InputsYou need to be able to recreate the imagery on which it was trained in order to perform inference. Specifically, you need to create an array-valued input from the scaled data and use that for input. (Recall that the new input node is named `array`, which is convenient because the array image has one band, named `array` by default.) The inputs will be provided as 144x144 patches (`inputTileSize`), at 30-meter resolution (`proj`), but 8 pixels will be thrown out (`inputOverlapSize`) to minimize boundary effects. OutputsThe output (which you also need to know), is a single float band named `impervious`. ###Code # Use Landsat 8 surface reflectance data. l8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR') # Cloud masking function. 
def maskL8sr(image): cloudShadowBitMask = ee.Number(2).pow(3).int() cloudsBitMask = ee.Number(2).pow(5).int() qa = image.select('pixel_qa') mask1 = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And( qa.bitwiseAnd(cloudsBitMask).eq(0)) mask2 = image.mask().reduce('min') mask3 = image.select(config.opticalBands).gt(0).And( image.select(config.opticalBands).lt(10000)).reduce('min') mask = mask1.And(mask2).And(mask3) return image.select(config.opticalBands).divide(10000).addBands( image.select(config.thermalBands).divide(10).clamp(273.15, 373.15) .subtract(273.15).divide(100)).updateMask(mask) # The image input data is a cloud-masked median composite. image = l8sr.filterDate( '2015-01-01', '2017-12-31').map(maskL8sr).median().select(config.BANDS).float() # Load the trained model and use it for prediction. model = ee.Model.fromAiPlatformPredictor( projectName = PROJECT, modelName = MODEL_NAME, version = VERSION_NAME, inputTileSize = [144, 144], inputOverlapSize = [8, 8], proj = ee.Projection('EPSG:4326').atScale(30), fixInputProj = True, outputBands = {'impervious': { 'type': ee.PixelType.float() } } ) predictions = model.predictImage(image.toArray()) # Use folium to visualize the input imagery and the predictions. mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3}) map = folium.Map(location=[38., -122.5], zoom_start=13) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name='median composite', ).add_to(map) mapid = predictions.getMapId({'min': 0, 'max': 1}) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name='predictions', ).add_to(map) map.add_child(folium.LayerControl()) map ###Output _____no_output_____ ###Markdown Run in Google Colab View source on GitHub IntroductionThis is a demonstration notebook. Suppose you have developed a model whose training is constrained by the resources available to the notebook VM.
In that case, you may want to use the [Google AI Platform](https://cloud.google.com/ml-engine/docs/tensorflow/) to train your model. The advantage of that is that long-running or resource-intensive training jobs can be performed in the background. Also, to use your trained model in Earth Engine, it needs to be [deployed as a hosted model](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) on AI Platform. This notebook uses previously created training data (see [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb)) and AI Platform to train a model, deploy it and use it to make predictions in Earth Engine. To do that, code [needs to be structured as a python package](https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer) that can be uploaded to AI Platform. The following cells produce that package programmatically. Setup software librariesInstall needed libraries to the notebook VM. Authenticate as necessary. ###Code # Cloud authentication. from google.colab import auth auth.authenticate_user() # Import and initialize the Earth Engine library. import ee ee.Authenticate() ee.Initialize() # Tensorflow setup. import tensorflow as tf print(tf.__version__) # Folium setup. import folium print(folium.__version__) ###Output _____no_output_____ ###Markdown Training code package setupIt's necessary to create a Python package to hold the training code. Here we're going to get started with that by creating a folder for the package and adding an empty `__init__.py` file. ###Code PACKAGE_PATH = 'ai_platform_demo' !ls -l !mkdir {PACKAGE_PATH} !touch {PACKAGE_PATH}/__init__.py !ls -l {PACKAGE_PATH} ###Output _____no_output_____ ###Markdown VariablesThese variables need to be stored in a place where other code can access them.
There are a variety of ways of accomplishing that, but here we'll use the `%%writefile` command to write the contents of the code cell to a file called `config.py`.**Note:** You need to insert the name of a bucket (below) to which you have write access! ###Code %%writefile {PACKAGE_PATH}/config.py import tensorflow as tf # INSERT YOUR PROJECT HERE! PROJECT = 'your-project' # INSERT YOUR BUCKET HERE! BUCKET = 'your-bucket' # Specify names of output locations in Cloud Storage. FOLDER = 'fcnn-demo' JOB_DIR = 'gs://' + BUCKET + '/' + FOLDER + '/trainer' MODEL_DIR = JOB_DIR + '/model' LOGS_DIR = JOB_DIR + '/logs' # Put the EEified model next to the trained model directory. EEIFIED_DIR = JOB_DIR + '/eeified' # Pre-computed training and eval data. DATA_BUCKET = 'ee-docs-demos' TRAINING_BASE = 'training_patches' EVAL_BASE = 'eval_patches' # Specify inputs (Landsat bands) to the model and the response variable. opticalBands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7'] thermalBands = ['B10', 'B11'] BANDS = opticalBands + thermalBands RESPONSE = 'impervious' FEATURES = BANDS + [RESPONSE] # Specify the size and shape of patches expected by the model. KERNEL_SIZE = 256 KERNEL_SHAPE = [KERNEL_SIZE, KERNEL_SIZE] COLUMNS = [ tf.io.FixedLenFeature(shape=KERNEL_SHAPE, dtype=tf.float32) for k in FEATURES ] FEATURES_DICT = dict(zip(FEATURES, COLUMNS)) # Sizes of the training and evaluation datasets. TRAIN_SIZE = 16000 EVAL_SIZE = 8000 # Specify model training parameters. BATCH_SIZE = 16 EPOCHS = 50 BUFFER_SIZE = 3000 OPTIMIZER = 'SGD' LOSS = 'MeanSquaredError' METRICS = ['RootMeanSquaredError'] ###Output _____no_output_____ ###Markdown Verify that the written file has the expected contents and is working as intended. ###Code !cat {PACKAGE_PATH}/config.py from ai_platform_demo import config print('\n\n', config.BATCH_SIZE) ###Output _____no_output_____ ###Markdown Training data, evaluation data and modelThe following is code to load training/evaluation data and the model. 
Write this into `model.py`. Note that these functions are developed and explained in [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb). ###Code %%writefile {PACKAGE_PATH}/model.py from . import config import tensorflow as tf from tensorflow.python.keras import layers from tensorflow.python.keras import losses from tensorflow.python.keras import metrics from tensorflow.python.keras import models from tensorflow.python.keras import optimizers # Dataset loading functions def parse_tfrecord(example_proto): return tf.io.parse_single_example(example_proto, config.FEATURES_DICT) def to_tuple(inputs): inputsList = [inputs.get(key) for key in config.FEATURES] stacked = tf.stack(inputsList, axis=0) stacked = tf.transpose(stacked, [1, 2, 0]) return stacked[:,:,:len(config.BANDS)], stacked[:,:,len(config.BANDS):] def get_dataset(pattern): glob = tf.io.gfile.glob(pattern) dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP') dataset = dataset.map(parse_tfrecord) dataset = dataset.map(to_tuple) return dataset def get_training_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.TRAINING_BASE + '*' dataset = get_dataset(glob) dataset = dataset.shuffle(config.BUFFER_SIZE).batch(config.BATCH_SIZE).repeat() return dataset def get_eval_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.EVAL_BASE + '*' dataset = get_dataset(glob) dataset = dataset.batch(1).repeat() return dataset # A variant of the UNET model. 
def conv_block(input_tensor, num_filters): encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(input_tensor) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(encoder) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) return encoder def encoder_block(input_tensor, num_filters): encoder = conv_block(input_tensor, num_filters) encoder_pool = layers.MaxPooling2D((2, 2), strides=(2, 2))(encoder) return encoder_pool, encoder def decoder_block(input_tensor, concat_tensor, num_filters): decoder = layers.Conv2DTranspose(num_filters, (2, 2), strides=(2, 2), padding='same')(input_tensor) decoder = layers.concatenate([concat_tensor, decoder], axis=-1) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) return decoder def get_model(): inputs = layers.Input(shape=[None, None, len(config.BANDS)]) # 256 encoder0_pool, encoder0 = encoder_block(inputs, 32) # 128 encoder1_pool, encoder1 = encoder_block(encoder0_pool, 64) # 64 encoder2_pool, encoder2 = encoder_block(encoder1_pool, 128) # 32 encoder3_pool, encoder3 = encoder_block(encoder2_pool, 256) # 16 encoder4_pool, encoder4 = encoder_block(encoder3_pool, 512) # 8 center = conv_block(encoder4_pool, 1024) # center decoder4 = decoder_block(center, encoder4, 512) # 16 decoder3 = decoder_block(decoder4, encoder3, 256) # 32 decoder2 = decoder_block(decoder3, encoder2, 128) # 64 decoder1 = decoder_block(decoder2, encoder1, 64) # 128 decoder0 = decoder_block(decoder1, encoder0, 32) # 256 outputs = layers.Conv2D(1, (1, 1), 
activation='sigmoid')(decoder0) model = models.Model(inputs=[inputs], outputs=[outputs]) model.compile( optimizer=optimizers.get(config.OPTIMIZER), loss=losses.get(config.LOSS), metrics=[metrics.get(metric) for metric in config.METRICS]) return model ###Output _____no_output_____ ###Markdown Verify that `model.py` is functioning as intended. ###Code from ai_platform_demo import model eval = model.get_eval_dataset() print(iter(eval.take(1)).next()) model = model.get_model() print(model.summary()) ###Output _____no_output_____ ###Markdown Training taskAt this stage, there should be `config.py` storing variables and `model.py` which has code for getting the training/evaluation data and the model. All that's left is code for training the model. The following will create `task.py`, which will get the training and eval data, train the model and save it when it's done in a Cloud Storage bucket. ###Code %%writefile {PACKAGE_PATH}/task.py from . import config from . import model import tensorflow as tf if __name__ == '__main__': training = model.get_training_dataset() evaluation = model.get_eval_dataset() m = model.get_model() m.fit( x=training, epochs=config.EPOCHS, steps_per_epoch=int(config.TRAIN_SIZE / config.BATCH_SIZE), validation_data=evaluation, validation_steps=int(config.EVAL_SIZE), callbacks=[tf.keras.callbacks.TensorBoard(config.LOGS_DIR)]) m.save(config.MODEL_DIR, save_format='tf') ###Output _____no_output_____ ###Markdown Submit the package to AI Platform for trainingNow there's everything to submit this job, which can be done from the command line. First, define some needed variables.**Note:** You need to insert the name of a Cloud project (below) you own! ###Code import time JOB_NAME = 'demo_training_job_' + str(int(time.time())) TRAINER_PACKAGE_PATH = 'ai_platform_demo' MAIN_TRAINER_MODULE = 'ai_platform_demo.task' REGION = 'us-central1' ###Output _____no_output_____ ###Markdown Now the training job is ready to be started. 
First, you need to enable the ML API for your project. This can be done from [this link to the Cloud Console](https://console.developers.google.com/apis/library/ml.googleapis.com). See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/training-jobs) for details. Note that the Python and TensorFlow versions should match what is used in the Colab notebook. ###Code !gcloud ai-platform jobs submit training {JOB_NAME} \ --job-dir {config.JOB_DIR} \ --package-path {TRAINER_PACKAGE_PATH} \ --module-name {MAIN_TRAINER_MODULE} \ --region {REGION} \ --project {config.PROJECT} \ --runtime-version 2.1 \ --python-version 3.7 \ --scale-tier basic-gpu ###Output _____no_output_____ ###Markdown Monitor the training jobThere's not much more to do until the model is finished training (~24 hours), but it's fun and useful to monitor its progress. You can do that programmatically with another `gcloud` command. The output of that command can be read into an `IPython.utils.text.SList` from which the `state` is extracted and ensured to be `SUCCEEDED`. Or you can monitor it from the [AI Platform jobs page](http://console.cloud.google.com/ai-platform/jobs) on the Cloud Console. ###Code desc = !gcloud ai-platform jobs describe {JOB_NAME} --project {config.PROJECT} state = desc.grep('state:')[0].split(':')[1].strip() print(state) ###Output _____no_output_____ ###Markdown Inspect the trained modelOnce the training job has finished, verify that you can load the trained model and print a summary of the fitted parameters. It's also useful to examine the logs with [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard). There's a convenient notebook extension that will launch TensorBoard in the Colab notebook. Examine the training and testing learning curves to ensure that the training process has converged.
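As an aside on the monitoring step above: the `grep`/`split` one-liner used to pull out the job state can be unpacked into a plain, testable function. This is a sketch; the sample lines are fabricated to mimic the YAML-style output of `gcloud ai-platform jobs describe`:

```python
def extract_state(describe_output):
  """Return the value of the first 'state:' line, or None if absent."""
  for line in describe_output:
    if line.strip().startswith('state:'):
      # Split only on the first colon so timestamps etc. stay intact.
      return line.split(':', 1)[1].strip()
  return None

# Fabricated sample of describe output, for illustration:
sample = ['createTime: 2020-01-01T00:00:00Z', 'state: SUCCEEDED']
print(extract_state(sample))  # SUCCEEDED
```

In the notebook, `extract_state(desc)` would do the same job as the one-liner, since an `SList` iterates over its lines.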
###Code %load_ext tensorboard %tensorboard --logdir {config.LOGS_DIR} ###Output _____no_output_____ ###Markdown Prepare the model for making predictions in Earth EngineBefore we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to the input and outputs of the model in order to accommodate the interchange format between pixels from Earth Engine (float32) and inputs to AI Platform (base64). (See [this doc](https://cloud.google.com/ml-engine/docs/online-predict#binary_data_in_prediction_input) for details.) `earthengine model prepare`The EEification process is handled for you using the Earth Engine command `earthengine model prepare`. To use that command, we need to specify the input and output model directories and the name of the input and output nodes in the TensorFlow computation graph. We can do all that programmatically: ###Code from tensorflow.python.tools import saved_model_utils meta_graph_def = saved_model_utils.get_meta_graph_def(config.MODEL_DIR, 'serve') inputs = meta_graph_def.signature_def['serving_default'].inputs outputs = meta_graph_def.signature_def['serving_default'].outputs # Just get the first thing(s) from the serving signature def. i.e. this # model only has a single input and a single output. input_name = None for k,v in inputs.items(): input_name = v.name break output_name = None for k,v in outputs.items(): output_name = v.name break # Make a dictionary that maps Earth Engine outputs and inputs to # AI Platform inputs and outputs, respectively. import json input_dict = "'" + json.dumps({input_name: "array"}) + "'" output_dict = "'" + json.dumps({output_name: "impervious"}) + "'" # You need to set the project before using the model prepare command.
!earthengine set_project {config.PROJECT} !earthengine model prepare --source_dir {config.MODEL_DIR} --dest_dir {config.EEIFIED_DIR} --input {input_dict} --output {output_dict} ###Output _____no_output_____ ###Markdown Note that you can also use the TensorFlow saved model command line tool to do this manually. See [this doc](https://www.tensorflow.org/guide/saved_model#cli_to_inspect_and_execute_savedmodel) for details. Also note the names we've specified for the new inputs and outputs: `array` and `impervious`, respectively. Perform inference using the trained model in Earth EngineBefore it's possible to get predictions from the trained and EEified model, it needs to be deployed on AI Platform. The first step is to create the model. The second step is to create a version. See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) for details. Note that models and versions can be monitored from the [AI Platform models page](http://console.cloud.google.com/ai-platform/models) of the Cloud Console. To ensure that the model is ready for predictions without having to warm up nodes, you can use a configuration yaml file to set the scaling type of this version to `autoScaling` and set a minimum number of nodes for the version. This will ensure there are always nodes on stand-by; however, you will be charged as long as they are running. For this example, we'll set the `minNodes` to 10. That means that at a minimum, 10 nodes are always up and running and waiting for predictions. The number of nodes will also scale up automatically if needed.
###Code %%writefile config.yaml autoScaling: minNodes: 10 MODEL_NAME = 'fcnn_demo_model' VERSION_NAME = 'v' + str(int(time.time())) print('Creating version: ' + VERSION_NAME) !gcloud ai-platform models create {MODEL_NAME} --project {config.PROJECT} !gcloud ai-platform versions create {VERSION_NAME} \ --project {config.PROJECT} \ --model {MODEL_NAME} \ --origin {config.EEIFIED_DIR} \ --framework "TENSORFLOW" \ --runtime-version 2.1 \ --python-version 3.7 \ --config=config.yaml ###Output _____no_output_____ ###Markdown There is now a trained model, prepared for serving to Earth Engine, hosted and versioned on AI Platform. We can now connect Earth Engine directly to the trained model for inference. You do that with the `ee.Model.fromAiPlatformPredictor` command. `ee.Model.fromAiPlatformPredictor`For this command to work, we need to know a lot about the model. To connect to the model, you need to know the name and version. InputsYou need to be able to recreate the imagery on which it was trained in order to perform inference. Specifically, you need to create an array-valued input from the scaled data and use that for input. (Recall that the new input node is named `array`, which is convenient because the array image has one band, named `array` by default.) The inputs will be provided as 144x144 patches (`inputTileSize`), at 30-meter resolution (`proj`), but 8 pixels will be thrown out (`inputOverlapSize`) to minimize boundary effects. OutputsThe output (which you also need to know), is a single float band named `impervious`. ###Code # Use Landsat 8 surface reflectance data. l8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR') # Cloud masking function. 
def maskL8sr(image): cloudShadowBitMask = ee.Number(2).pow(3).int() cloudsBitMask = ee.Number(2).pow(5).int() qa = image.select('pixel_qa') mask1 = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And( qa.bitwiseAnd(cloudsBitMask).eq(0)) mask2 = image.mask().reduce('min') mask3 = image.select(config.opticalBands).gt(0).And( image.select(config.opticalBands).lt(10000)).reduce('min') mask = mask1.And(mask2).And(mask3) return image.select(config.opticalBands).divide(10000).addBands( image.select(config.thermalBands).divide(10).clamp(273.15, 373.15) .subtract(273.15).divide(100)).updateMask(mask) # The image input data is a cloud-masked median composite. image = l8sr.filterDate( '2015-01-01', '2017-12-31').map(maskL8sr).median().select(config.BANDS).float() # Load the trained model and use it for prediction. model = ee.Model.fromAiPlatformPredictor( projectName = config.PROJECT, modelName = MODEL_NAME, version = VERSION_NAME, inputTileSize = [144, 144], inputOverlapSize = [8, 8], proj = ee.Projection('EPSG:4326').atScale(30), fixInputProj = True, outputBands = {'impervious': { 'type': ee.PixelType.float() } } ) predictions = model.predictImage(image.toArray()) # Use folium to visualize the input imagery and the predictions. mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3}) map = folium.Map(location=[38., -122.5], zoom_start=13) folium.TileLayer( tiles=mapid['tile_fetcher'].url_format, attr='Google Earth Engine', overlay=True, name='median composite', ).add_to(map) mapid = predictions.getMapId({'min': 0, 'max': 1}) folium.TileLayer( tiles=mapid['tile_fetcher'].url_format, attr='Google Earth Engine', overlay=True, name='predictions', ).add_to(map) map.add_child(folium.LayerControl()) map ###Output _____no_output_____ ###Markdown Run in Google Colab View source on GitHub IntroductionThis is a demonstration notebook. Suppose you have developed a model whose training is constrained by the resources available to the notebook VM.
In that case, you may want to use the [Google AI Platform](https://cloud.google.com/ml-engine/docs/tensorflow/) to train your model. The advantage of that is that long-running or resource intensive training jobs can be performed in the background. Also, to use your trained model in Earth Engine, it needs to be [deployed as a hosted model](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) on AI Platform. This notebook uses previously created training data (see [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb)) and AI Platform to train a model, deploy it and use it to make predictions in Earth Engine. To do that, code [needs to be structured as a python package](https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer) that can be uploaded to AI Platform. The following cells produce that package programmatically. Setup software librariesInstall needed libraries to the notebook VM. Authenticate as necessary. ###Code # Cloud authentication. from google.colab import auth auth.authenticate_user() # Import and initialize the Earth Engine library. import ee ee.Authenticate() ee.Initialize() # Tensorflow setup. import tensorflow as tf print(tf.__version__) # Folium setup. import folium print(folium.__version__) ###Output _____no_output_____ ###Markdown Training code package setupIt's necessary to create a Python package to hold the training code. Here we're going to get started with that by creating a folder for the package and adding an empty `__init__.py` file. ###Code PACKAGE_PATH = 'ai_platform_demo' !ls -l !mkdir {PACKAGE_PATH} !touch {PACKAGE_PATH}/__init__.py !ls -l {PACKAGE_PATH} ###Output _____no_output_____ ###Markdown VariablesThese variables need to be stored in a place where other code can access them. 
There are a variety of ways of accomplishing that, but here we'll use the `%%writefile` command to write the contents of the code cell to a file called `config.py`.**Note:** You need to insert the name of a bucket (below) to which you have write access! ###Code %%writefile {PACKAGE_PATH}/config.py import tensorflow as tf # INSERT YOUR PROJECT HERE! PROJECT = 'your-project' # INSERT YOUR BUCKET HERE! BUCKET = 'your-bucket' # This is a good region for hosting AI models. REGION = 'us-central1' # Specify names of output locations in Cloud Storage. FOLDER = 'fcnn-demo' JOB_DIR = 'gs://' + BUCKET + '/' + FOLDER + '/trainer' MODEL_DIR = JOB_DIR + '/model' LOGS_DIR = JOB_DIR + '/logs' # Put the EEified model next to the trained model directory. EEIFIED_DIR = JOB_DIR + '/eeified' # Pre-computed training and eval data. DATA_BUCKET = 'ee-docs-demos' TRAINING_BASE = 'training_patches' EVAL_BASE = 'eval_patches' # Specify inputs (Landsat bands) to the model and the response variable. opticalBands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7'] thermalBands = ['B10', 'B11'] BANDS = opticalBands + thermalBands RESPONSE = 'impervious' FEATURES = BANDS + [RESPONSE] # Specify the size and shape of patches expected by the model. KERNEL_SIZE = 256 KERNEL_SHAPE = [KERNEL_SIZE, KERNEL_SIZE] COLUMNS = [ tf.io.FixedLenFeature(shape=KERNEL_SHAPE, dtype=tf.float32) for k in FEATURES ] FEATURES_DICT = dict(zip(FEATURES, COLUMNS)) # Sizes of the training and evaluation datasets. TRAIN_SIZE = 16000 EVAL_SIZE = 8000 # Specify model training parameters. BATCH_SIZE = 16 EPOCHS = 50 BUFFER_SIZE = 3000 OPTIMIZER = 'SGD' LOSS = 'MeanSquaredError' METRICS = ['RootMeanSquaredError'] ###Output _____no_output_____ ###Markdown Verify that the written file has the expected contents and is working as intended. 
###Code !cat {PACKAGE_PATH}/config.py from ai_platform_demo import config print('\n\n', config.BATCH_SIZE) ###Output _____no_output_____ ###Markdown Training data, evaluation data and modelThe following is code to load training/evaluation data and the model. Write this into `model.py`. Note that these functions are developed and explained in [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb). ###Code %%writefile {PACKAGE_PATH}/model.py from . import config import tensorflow as tf from tensorflow.python.keras import layers from tensorflow.python.keras import losses from tensorflow.python.keras import metrics from tensorflow.python.keras import models from tensorflow.python.keras import optimizers # Dataset loading functions def parse_tfrecord(example_proto): return tf.io.parse_single_example(example_proto, config.FEATURES_DICT) def to_tuple(inputs): inputsList = [inputs.get(key) for key in config.FEATURES] stacked = tf.stack(inputsList, axis=0) stacked = tf.transpose(stacked, [1, 2, 0]) return stacked[:,:,:len(config.BANDS)], stacked[:,:,len(config.BANDS):] def get_dataset(pattern): glob = tf.io.gfile.glob(pattern) dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP') dataset = dataset.map(parse_tfrecord) dataset = dataset.map(to_tuple) return dataset def get_training_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.TRAINING_BASE + '*' dataset = get_dataset(glob) dataset = dataset.shuffle(config.BUFFER_SIZE).batch(config.BATCH_SIZE).repeat() return dataset def get_eval_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.EVAL_BASE + '*' dataset = get_dataset(glob) dataset = dataset.batch(1).repeat() return dataset # A variant of the UNET model. 
def conv_block(input_tensor, num_filters): encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(input_tensor) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(encoder) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) return encoder def encoder_block(input_tensor, num_filters): encoder = conv_block(input_tensor, num_filters) encoder_pool = layers.MaxPooling2D((2, 2), strides=(2, 2))(encoder) return encoder_pool, encoder def decoder_block(input_tensor, concat_tensor, num_filters): decoder = layers.Conv2DTranspose(num_filters, (2, 2), strides=(2, 2), padding='same')(input_tensor) decoder = layers.concatenate([concat_tensor, decoder], axis=-1) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) return decoder def get_model(): inputs = layers.Input(shape=[None, None, len(config.BANDS)]) # 256 encoder0_pool, encoder0 = encoder_block(inputs, 32) # 128 encoder1_pool, encoder1 = encoder_block(encoder0_pool, 64) # 64 encoder2_pool, encoder2 = encoder_block(encoder1_pool, 128) # 32 encoder3_pool, encoder3 = encoder_block(encoder2_pool, 256) # 16 encoder4_pool, encoder4 = encoder_block(encoder3_pool, 512) # 8 center = conv_block(encoder4_pool, 1024) # center decoder4 = decoder_block(center, encoder4, 512) # 16 decoder3 = decoder_block(decoder4, encoder3, 256) # 32 decoder2 = decoder_block(decoder3, encoder2, 128) # 64 decoder1 = decoder_block(decoder2, encoder1, 64) # 128 decoder0 = decoder_block(decoder1, encoder0, 32) # 256 outputs = layers.Conv2D(1, (1, 1), 
activation='sigmoid')(decoder0) model = models.Model(inputs=[inputs], outputs=[outputs]) model.compile( optimizer=optimizers.get(config.OPTIMIZER), loss=losses.get(config.LOSS), metrics=[metrics.get(metric) for metric in config.METRICS]) return model ###Output _____no_output_____ ###Markdown Verify that `model.py` is functioning as intended. ###Code from ai_platform_demo import model eval = model.get_eval_dataset() print(iter(eval.take(1)).next()) model = model.get_model() print(model.summary()) ###Output _____no_output_____ ###Markdown Training taskAt this stage, there should be `config.py` storing variables and `model.py` which has code for getting the training/evaluation data and the model. All that's left is code for training the model. The following will create `task.py`, which will get the training and eval data, train the model and save it when it's done in a Cloud Storage bucket. ###Code %%writefile {PACKAGE_PATH}/task.py from . import config from . import model import tensorflow as tf if __name__ == '__main__': training = model.get_training_dataset() evaluation = model.get_eval_dataset() m = model.get_model() m.fit( x=training, epochs=config.EPOCHS, steps_per_epoch=int(config.TRAIN_SIZE / config.BATCH_SIZE), validation_data=evaluation, validation_steps=int(config.EVAL_SIZE), callbacks=[tf.keras.callbacks.TensorBoard(config.LOGS_DIR)]) m.save(config.MODEL_DIR, save_format='tf') ###Output _____no_output_____ ###Markdown Submit the package to AI Platform for trainingNow there's everything to submit this job, which can be done from the command line. First, define some needed variables.**Note:** You need to insert the name of a Cloud project (below) you own! ###Code import time JOB_NAME = 'demo_training_job_' + str(int(time.time())) TRAINER_PACKAGE_PATH = 'ai_platform_demo' MAIN_TRAINER_MODULE = 'ai_platform_demo.task' REGION = 'us-central1' ###Output _____no_output_____ ###Markdown Now the training job is ready to be started. 
First, you need to enable the ML API for your project. This can be done from [this link to the Cloud Console](https://console.developers.google.com/apis/library/ml.googleapis.com). See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/training-jobs) for details. Note that the Python and TensorFlow versions should match what is used in the Colab notebook. ###Code !gcloud ai-platform jobs submit training {JOB_NAME} \ --job-dir {config.JOB_DIR} \ --package-path {TRAINER_PACKAGE_PATH} \ --module-name {MAIN_TRAINER_MODULE} \ --region {REGION} \ --project {config.PROJECT} \ --runtime-version 2.3 \ --python-version 3.7 \ --scale-tier basic-gpu ###Output _____no_output_____ ###Markdown Monitor the training jobThere's not much more to do until the model is finished training (~24 hours), but it's fun and useful to monitor its progress. You can do that programmatically with another `gcloud` command. The output of that command can be read into an `IPython.utils.text.SList` from which the `state` is extracted and ensured to be `SUCCEEDED`. Or you can monitor it from the [AI Platform jobs page](http://console.cloud.google.com/ai-platform/jobs) on the Cloud Console. ###Code desc = !gcloud ai-platform jobs describe {JOB_NAME} --project {config.PROJECT} state = desc.grep('state:')[0].split(':')[1].strip() print(state) ###Output _____no_output_____ ###Markdown Inspect the trained modelOnce the training job has finished, verify that you can load the trained model and print a summary of the fitted parameters. It's also useful to examine the logs with [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard). There's a convenient notebook extension that will launch TensorBoard in the Colab notebook. Examine the training and testing learning curves to ensure that the training process has converged.
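Eyeballing the learning curves can be paired with a simple programmatic plateau check. The sketch below is a heuristic of our own with toy loss values and an illustrative tolerance, not part of the original notebook:

```python
def has_converged(val_losses, window=5, tol=1e-3):
    """Heuristic plateau check: mean per-epoch improvement in validation
    loss over the last `window` epochs is below `tol`."""
    if len(val_losses) < window + 1:
        return False
    recent = val_losses[-(window + 1):]
    return (recent[0] - recent[-1]) / window < tol

# Toy history: big early drops, then a flat tail.
history = [0.30, 0.12, 0.08, 0.0710, 0.0705, 0.0704, 0.0703, 0.0703, 0.0702]
print(has_converged(history))  # True
```

In practice the history would come from the `History` object returned by `m.fit(...)`, e.g. `m.fit(...).history['val_loss']`.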
###Code %load_ext tensorboard %tensorboard --logdir {config.LOGS_DIR} ###Output _____no_output_____ ###Markdown Prepare the model for making predictions in Earth EngineBefore we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to the input and outputs of the model in order to accommodate the interchange format between pixels from Earth Engine (float32) and inputs to AI Platform (base64). (See [this doc](https://cloud.google.com/ml-engine/docs/online-predictbinary_data_in_prediction_input) for details.) `earthengine model prepare`The EEification process is handled for you using the Earth Engine command `earthengine model prepare`. To use that command, we need to specify the input and output model directories and the name of the input and output nodes in the TensorFlow computation graph. We can do all that programmatically: ###Code from tensorflow.python.tools import saved_model_utils meta_graph_def = saved_model_utils.get_meta_graph_def(config.MODEL_DIR, 'serve') inputs = meta_graph_def.signature_def['serving_default'].inputs outputs = meta_graph_def.signature_def['serving_default'].outputs # Just get the first thing(s) from the serving signature def. i.e. this # model only has a single input and a single output. input_name = None for k,v in inputs.items(): input_name = v.name break output_name = None for k,v in outputs.items(): output_name = v.name break # Make a dictionary that maps Earth Engine outputs and inputs to # AI Platform inputs and outputs, respectively. import json input_dict = "'" + json.dumps({input_name: "array"}) + "'" output_dict = "'" + json.dumps({output_name: "impervious"}) + "'" # You need to set the project before using the model prepare command. 
!earthengine set_project {config.PROJECT} !earthengine model prepare --source_dir {config.MODEL_DIR} --dest_dir {config.EEIFIED_DIR} --input {input_dict} --output {output_dict} ###Output _____no_output_____ ###Markdown Note that you can also use the TensorFlow saved model command line tool to do this manually. See [this doc](https://www.tensorflow.org/guide/saved_modelcli_to_inspect_and_execute_savedmodel) for details. Also note the names we've specified for the new inputs and outputs: `array` and `impervious`, respectively. Perform inference using the trained model in Earth EngineBefore it's possible to get predictions from the trained and EEified model, it needs to be deployed on AI Platform. The first step is to create the model. The second step is to create a version. See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) for details. Note that models and versions can be monitored from the [AI Platform models page](http://console.cloud.google.com/ai-platform/models) of the Cloud Console. To ensure that the model is ready for predictions without having to warm up nodes, you can use a configuration yaml file to set the scaling type of this version to `autoScaling`, and set a minimum number of nodes for the version. This will ensure there are always nodes on stand-by; however, you will be charged as long as they are running. For this example, we'll set the `minNodes` to 10. That means that at a minimum, 10 nodes are always up and running and waiting for predictions. The number of nodes will also scale up automatically if needed.
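Because those stand-by nodes accrue charges continuously, a quick back-of-the-envelope node-hour count helps when picking `minNodes`. A minimal sketch (node-hours only; no real pricing figures assumed):

```python
def min_monthly_node_hours(min_nodes, hours_per_day=24, days=30):
    """Node-hours billed per month just to keep `min_nodes` on stand-by,
    before any autoscaling above that floor."""
    return min_nodes * hours_per_day * days

print(min_monthly_node_hours(10))  # 7200 node-hours each month at minimum
```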
###Code %%writefile config.yaml autoScaling: minNodes: 10 MODEL_NAME = 'fcnn_demo_model' VERSION_NAME = 'v' + str(int(time.time())) print('Creating version: ' + VERSION_NAME) !gcloud ai-platform models create {MODEL_NAME} \ --project {config.PROJECT} \ --region {REGION} !gcloud ai-platform versions create {VERSION_NAME} \ --project {config.PROJECT} \ --model {MODEL_NAME} \ --region {REGION} \ --origin {config.EEIFIED_DIR} \ --framework "TENSORFLOW" \ --runtime-version 2.3 \ --python-version 3.7 \ --config=config.yaml ###Output _____no_output_____ ###Markdown There is now a trained model, prepared for serving to Earth Engine, hosted and versioned on AI Platform. We can now connect Earth Engine directly to the trained model for inference. You do that with the `ee.Model.fromAiPlatformPredictor` command. `ee.Model.fromAiPlatformPredictor`For this command to work, we need to know a lot about the model. To connect to the model, you need to know the name and version. InputsYou need to be able to recreate the imagery on which it was trained in order to perform inference. Specifically, you need to create an array-valued input from the scaled data and use that for input. (Recall that the new input node is named `array`, which is convenient because the array image has one band, named `array` by default.) The inputs will be provided as 144x144 patches (`inputTileSize`), at 30-meter resolution (`proj`), but 8 pixels will be thrown out (`inputOverlapSize`) to minimize boundary effects. OutputsThe output (which you also need to know) is a single float band named `impervious`. ###Code # Use Landsat 8 surface reflectance data. l8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR') # Cloud masking function.
def maskL8sr(image): cloudShadowBitMask = ee.Number(2).pow(3).int() cloudsBitMask = ee.Number(2).pow(5).int() qa = image.select('pixel_qa') mask1 = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And( qa.bitwiseAnd(cloudsBitMask).eq(0)) mask2 = image.mask().reduce('min') mask3 = image.select(config.opticalBands).gt(0).And( image.select(config.opticalBands).lt(10000)).reduce('min') mask = mask1.And(mask2).And(mask3) return image.select(config.opticalBands).divide(10000).addBands( image.select(config.thermalBands).divide(10).clamp(273.15, 373.15) .subtract(273.15).divide(100)).updateMask(mask) # The image input data is a cloud-masked median composite. image = l8sr.filterDate( '2015-01-01', '2017-12-31').map(maskL8sr).median().select(config.BANDS).float() # Load the trained model and use it for prediction. If you specified a region # other than the default (us-central1) at model creation, specify it here. model = ee.Model.fromAiPlatformPredictor( projectName = config.PROJECT, modelName = MODEL_NAME, version = VERSION_NAME, inputTileSize = [144, 144], inputOverlapSize = [8, 8], proj = ee.Projection('EPSG:4326').atScale(30), fixInputProj = True, outputBands = {'impervious': { 'type': ee.PixelType.float() } } ) predictions = model.predictImage(image.toArray()) # Use folium to visualize the input imagery and the predictions. mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3}) map = folium.Map(location=[38., -122.5], zoom_start=13) folium.TileLayer( tiles=mapid['tile_fetcher'].url_format, attr='Google Earth Engine', overlay=True, name='median composite', ).add_to(map) mapid = predictions.getMapId({'min': 0, 'max': 1}) folium.TileLayer( tiles=mapid['tile_fetcher'].url_format, attr='Google Earth Engine', overlay=True, name='predictions', ).add_to(map) map.add_child(folium.LayerControl()) map ###Output _____no_output_____ ###Markdown IntroductionThis is a demonstration notebook. 
Suppose you have developed a model the training of which is constrained by the resources available to the notebook VM. In that case, you may want to use the [Google AI Platform](https://cloud.google.com/ml-engine/docs/tensorflow/) to train your model. The advantage of that is that long-running or resource-intensive training jobs can be performed in the background. Also, to use your trained model in Earth Engine, it needs to be [deployed as a hosted model](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) on AI Platform. This notebook uses previously created training data (see [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb)) and AI Platform to train a model, deploy it and use it to make predictions in Earth Engine. To do that, code [needs to be structured as a Python package](https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer) that can be uploaded to AI Platform. The following cells produce that package programmatically. Setup software librariesInstall needed libraries to the notebook VM. Authenticate as necessary. ###Code # Cloud authentication. from google.colab import auth auth.authenticate_user() # Earth Engine install to notebook VM, authenticate. !pip install earthengine-api # Import and initialize the Earth Engine library. import ee ee.Authenticate() ee.Initialize() # Tensorflow setup. import tensorflow as tf tf.enable_eager_execution() print(tf.__version__) # Folium setup. import folium print(folium.__version__) # Define the URL format used for Earth Engine generated map tiles. EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' ###Output _____no_output_____ ###Markdown Training code package setupIt's necessary to create a Python package to hold the training code. Here we're going to get started with that by creating a folder for the package and adding an empty `__init__.py` file.
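The shell commands in the next cell can also be written in pure Python, which is handy outside a notebook. A sketch using `pathlib` (the temporary directory is just for illustration):

```python
import tempfile
from pathlib import Path

def make_trainer_package(root, name='ai_platform_demo'):
    """Create the package folder and the empty __init__.py that makes it
    importable: the same effect as the `!mkdir` / `!touch` cell."""
    pkg = Path(root) / name
    pkg.mkdir(parents=True, exist_ok=True)
    (pkg / '__init__.py').touch()
    return sorted(p.name for p in pkg.iterdir())

print(make_trainer_package(tempfile.mkdtemp()))  # ['__init__.py']
```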
###Code PACKAGE_PATH = 'ai_platform_demo' !ls -l !mkdir {PACKAGE_PATH} !touch {PACKAGE_PATH}/__init__.py !ls -l {PACKAGE_PATH} ###Output _____no_output_____ ###Markdown VariablesThese variables need to be stored in a place where other code can access them. There are a variety of ways of accomplishing that, but here we'll use the `%%writefile` command to write the contents of the code cell to a file called `config.py`.**Note:** You need to insert the name of a bucket (below) to which you have write access! ###Code %%writefile {PACKAGE_PATH}/config.py import tensorflow as tf # INSERT YOUR BUCKET HERE! BUCKET = 'your-bucket-name' # Specify names of output locations in Cloud Storage. FOLDER = 'fcnn-demo' JOB_DIR = 'gs://' + BUCKET + '/' + FOLDER + '/trainer' MODEL_DIR = JOB_DIR + '/model' LOGS_DIR = JOB_DIR + '/logs' # Pre-computed training and eval data. DATA_BUCKET = 'ee-docs-demos' TRAINING_BASE = 'training_patches' EVAL_BASE = 'eval_patches' # Specify inputs (Landsat bands) to the model and the response variable. opticalBands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7'] thermalBands = ['B10', 'B11'] BANDS = opticalBands + thermalBands RESPONSE = 'impervious' FEATURES = BANDS + [RESPONSE] # Specify the size and shape of patches expected by the model. KERNEL_SIZE = 256 KERNEL_SHAPE = [KERNEL_SIZE, KERNEL_SIZE] COLUMNS = [ tf.io.FixedLenFeature(shape=KERNEL_SHAPE, dtype=tf.float32) for k in FEATURES ] FEATURES_DICT = dict(zip(FEATURES, COLUMNS)) # Sizes of the training and evaluation datasets. TRAIN_SIZE = 16000 EVAL_SIZE = 8000 # Specify model training parameters. BATCH_SIZE = 16 EPOCHS = 50 BUFFER_SIZE = 3000 OPTIMIZER = 'SGD' LOSS = 'MeanSquaredError' METRICS = ['RootMeanSquaredError'] ###Output _____no_output_____ ###Markdown Verify that the written file has the expected contents and is working as intended. 
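To see how the parsing schema in `config.py` fits together, here is the same list/dict assembly with a plain tuple standing in for `tf.io.FixedLenFeature`, so the sketch runs without TensorFlow:

```python
opticalBands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7']
thermalBands = ['B10', 'B11']
BANDS = opticalBands + thermalBands
RESPONSE = 'impervious'
FEATURES = BANDS + [RESPONSE]

# Stand-in for tf.io.FixedLenFeature(shape=KERNEL_SHAPE, dtype=tf.float32):
# every feature, inputs and response alike, parses as a 256x256 float patch.
KERNEL_SIZE = 256
KERNEL_SHAPE = [KERNEL_SIZE, KERNEL_SIZE]
COLUMNS = [('float32', KERNEL_SHAPE) for _ in FEATURES]
FEATURES_DICT = dict(zip(FEATURES, COLUMNS))

print(len(BANDS), len(FEATURES))    # 9 10
print(FEATURES_DICT['impervious'])  # ('float32', [256, 256])
```

The real `FEATURES_DICT` is what `tf.io.parse_single_example` consumes in `model.py`.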
###Code !cat {PACKAGE_PATH}/config.py from ai_platform_demo import config print('\n\n', config.BATCH_SIZE) ###Output _____no_output_____ ###Markdown Training data, evaluation data and modelThe following is code to load training/evaluation data and the model. Write this into `model.py`. Note that these functions are developed and explained in [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb). The source of the model code is [this demonstration notebook](https://github.com/tensorflow/models/blob/master/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb). ###Code %%writefile {PACKAGE_PATH}/model.py from . import config import tensorflow as tf from tensorflow.python.keras import layers from tensorflow.python.keras import losses from tensorflow.python.keras import metrics from tensorflow.python.keras import models from tensorflow.python.keras import optimizers # Dataset loading functions def parse_tfrecord(example_proto): return tf.io.parse_single_example(example_proto, config.FEATURES_DICT) def to_tuple(inputs): inputsList = [inputs.get(key) for key in config.FEATURES] stacked = tf.stack(inputsList, axis=0) stacked = tf.transpose(stacked, [1, 2, 0]) return stacked[:,:,:len(config.BANDS)], stacked[:,:,len(config.BANDS):] def get_dataset(pattern): glob = tf.io.gfile.glob(pattern) dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP') dataset = dataset.map(parse_tfrecord) dataset = dataset.map(to_tuple) return dataset def get_training_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.TRAINING_BASE + '*' dataset = get_dataset(glob) dataset = dataset.shuffle(config.BUFFER_SIZE).batch(config.BATCH_SIZE).repeat() return dataset def get_eval_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.EVAL_BASE + '*' dataset = get_dataset(glob) dataset = dataset.batch(1).repeat() return dataset 
# A variant of the UNET model. def conv_block(input_tensor, num_filters): encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(input_tensor) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(encoder) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) return encoder def encoder_block(input_tensor, num_filters): encoder = conv_block(input_tensor, num_filters) encoder_pool = layers.MaxPooling2D((2, 2), strides=(2, 2))(encoder) return encoder_pool, encoder def decoder_block(input_tensor, concat_tensor, num_filters): decoder = layers.Conv2DTranspose(num_filters, (2, 2), strides=(2, 2), padding='same')(input_tensor) decoder = layers.concatenate([concat_tensor, decoder], axis=-1) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) return decoder def get_model(): inputs = layers.Input(shape=[None, None, len(config.BANDS)]) # 256 encoder0_pool, encoder0 = encoder_block(inputs, 32) # 128 encoder1_pool, encoder1 = encoder_block(encoder0_pool, 64) # 64 encoder2_pool, encoder2 = encoder_block(encoder1_pool, 128) # 32 encoder3_pool, encoder3 = encoder_block(encoder2_pool, 256) # 16 encoder4_pool, encoder4 = encoder_block(encoder3_pool, 512) # 8 center = conv_block(encoder4_pool, 1024) # center decoder4 = decoder_block(center, encoder4, 512) # 16 decoder3 = decoder_block(decoder4, encoder3, 256) # 32 decoder2 = decoder_block(decoder3, encoder2, 128) # 64 decoder1 = decoder_block(decoder2, encoder1, 64) # 128 decoder0 = decoder_block(decoder1, encoder0, 32) # 256 outputs = 
layers.Conv2D(1, (1, 1), activation='sigmoid')(decoder0) model = models.Model(inputs=[inputs], outputs=[outputs]) model.compile( optimizer=optimizers.get(config.OPTIMIZER), loss=losses.get(config.LOSS), metrics=[metrics.get(metric) for metric in config.METRICS]) return model ###Output _____no_output_____ ###Markdown Verify that `model.py` is functioning as intended. ###Code from ai_platform_demo import model eval = model.get_eval_dataset() print(iter(eval.take(1)).next()) model = model.get_model() print(model.summary()) ###Output _____no_output_____ ###Markdown Training taskAt this stage, there should be `config.py` storing variables and `model.py` which has code for getting the training/evaluation data and the model. All that's left is code for training the model. The following will create `task.py`, which will get the training and eval data, train the model and save it when it's done in a Cloud Storage bucket. ###Code %%writefile {PACKAGE_PATH}/task.py from . import config from . import model import tensorflow as tf if __name__ == '__main__': training = model.get_training_dataset() evaluation = model.get_eval_dataset() m = model.get_model() m.fit( x=training, epochs=config.EPOCHS, steps_per_epoch=int(config.TRAIN_SIZE / config.BATCH_SIZE), validation_data=evaluation, validation_steps=int(config.EVAL_SIZE), callbacks=[tf.keras.callbacks.TensorBoard(config.LOGS_DIR)]) tf.contrib.saved_model.save_keras_model(m, config.MODEL_DIR) ###Output _____no_output_____ ###Markdown Submit the package to AI Platform for trainingNow there's everything to submit this job, which can be done from the command line. First, define some needed variables.**Note:** You need to insert the name of a Cloud project (below) you own! ###Code import time # INSERT YOUR PROJECT HERE! 
PROJECT = 'your-project' JOB_NAME = 'demo_training_job_' + str(int(time.time())) TRAINER_PACKAGE_PATH = 'ai_platform_demo' MAIN_TRAINER_MODULE = 'ai_platform_demo.task' REGION = 'us-central1' ###Output _____no_output_____ ###Markdown Now the training job is ready to be started. First, you need to enable the ML API for your project. This can be done from [this link to the Cloud Console](https://console.developers.google.com/apis/library/ml.googleapis.com). See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/training-jobs) for details. Note that the Python and TensorFlow versions should match what is used in the Colab notebook. ###Code !gcloud ai-platform jobs submit training {JOB_NAME} \ --job-dir {config.JOB_DIR} \ --package-path {TRAINER_PACKAGE_PATH} \ --module-name {MAIN_TRAINER_MODULE} \ --region {REGION} \ --project {PROJECT} \ --runtime-version 1.14 \ --python-version 3.5 \ --scale-tier basic-gpu ###Output _____no_output_____ ###Markdown Monitor the training jobThere's not much more to do until the model is finished training (~24 hours), but it's fun and useful to monitor its progress. You can do that programmatically with another `gcloud` command. The output of that command can be read into an `IPython.utils.text.SList` from which the `state` is extracted and ensured to be `SUCCEEDED`. Or you can monitor it from the [AI Platform jobs page](http://console.cloud.google.com/ai-platform/jobs) on the Cloud Console.
There's a convenient notebook extension that will launch TensorBoard in the Colab notebook. Examine the training and testing learning curves to ensure that the training process has converged. ###Code %load_ext tensorboard %tensorboard --logdir {config.LOGS_DIR} ###Output _____no_output_____ ###Markdown Prepare the model for making predictions in Earth EngineBefore we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to the input and outputs of the model in order to accommodate the interchange format between pixels from Earth Engine (float32) and inputs to AI Platform (base64). (See [this doc](https://cloud.google.com/ml-engine/docs/online-predictbinary_data_in_prediction_input) for details.) `earthengine model prepare`The EEification process is handled for you using the Earth Engine command `earthengine model prepare`. To use that command, we need to specify the input and output model directories and the name of the input and output nodes in the TensorFlow computation graph. We can do all that programmatically: ###Code from tensorflow.python.tools import saved_model_utils meta_graph_def = saved_model_utils.get_meta_graph_def(config.MODEL_DIR, 'serve') inputs = meta_graph_def.signature_def['serving_default'].inputs outputs = meta_graph_def.signature_def['serving_default'].outputs # Just get the first thing(s) from the serving signature def. i.e. this # model only has a single input and a single output. input_name = None for k,v in inputs.items(): input_name = v.name break output_name = None for k,v in outputs.items(): output_name = v.name break # Make a dictionary that maps Earth Engine outputs and inputs to # AI Platform inputs and outputs, respectively.
import json input_dict = "'" + json.dumps({input_name: "array"}) + "'" output_dict = "'" + json.dumps({output_name: "impervious"}) + "'" # Put the EEified model next to the trained model directory. EEIFIED_DIR = config.JOB_DIR + '/eeified' # You need to set the project before using the model prepare command. !earthengine set_project {PROJECT} !earthengine model prepare --source_dir {config.MODEL_DIR} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict} ###Output _____no_output_____ ###Markdown Note that you can also use the TensorFlow saved model command line tool to do this manually. See [this doc](https://www.tensorflow.org/guide/saved_modelcli_to_inspect_and_execute_savedmodel) for details. Also note the names we've specified for the new inputs and outputs: `array` and `impervious`, respectively. Perform inference using the trained model in Earth EngineBefore it's possible to get predictions from the trained and EEified model, it needs to be deployed on AI Platform. The first step is to create the model. The second step is to create a version. See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) for details. Note that models and versions can be monitored from the [AI Platform models page](http://console.cloud.google.com/ai-platform/models) of the Cloud Console. ###Code MODEL_NAME = 'fcnn_demo_model' VERSION_NAME = 'v' + str(int(time.time())) print('Creating version: ' + VERSION_NAME) !gcloud ai-platform models create {MODEL_NAME} --project {PROJECT} !gcloud ai-platform versions create {VERSION_NAME} \ --project {PROJECT} \ --model {MODEL_NAME} \ --origin {EEIFIED_DIR} \ --runtime-version=1.14 \ --framework "TENSORFLOW" \ --python-version=3.5 ###Output _____no_output_____ ###Markdown There is now a trained model, prepared for serving to Earth Engine, hosted and versioned on AI Platform. We can now connect Earth Engine directly to the trained model for inference. 
You do that with the `ee.Model.fromAiPlatformPredictor` command. `ee.Model.fromAiPlatformPredictor`For this command to work, we need to know a lot about the model. To connect to the model, you need to know the name and version. InputsYou need to be able to recreate the imagery on which it was trained in order to perform inference. Specifically, you need to create an array-valued input from the scaled data and use that for input. (Recall that the new input node is named `array`, which is convenient because the array image has one band, named `array` by default.) The inputs will be provided as 144x144 patches (`inputTileSize`), at 30-meter resolution (`proj`), but 8 pixels will be thrown out (`inputOverlapSize`) to minimize boundary effects. OutputsThe output (which you also need to know) is a single float band named `impervious`. ###Code # Use Landsat 8 surface reflectance data. l8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR') # Cloud masking function. def maskL8sr(image): cloudShadowBitMask = ee.Number(2).pow(3).int() cloudsBitMask = ee.Number(2).pow(5).int() qa = image.select('pixel_qa') mask1 = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And( qa.bitwiseAnd(cloudsBitMask).eq(0)) mask2 = image.mask().reduce('min') mask3 = image.select(config.opticalBands).gt(0).And( image.select(config.opticalBands).lt(10000)).reduce('min') mask = mask1.And(mask2).And(mask3) return image.select(config.opticalBands).divide(10000).addBands( image.select(config.thermalBands).divide(10).clamp(273.15, 373.15) .subtract(273.15).divide(100)).updateMask(mask) # The image input data is a cloud-masked median composite. image = l8sr.filterDate( '2015-01-01', '2017-12-31').map(maskL8sr).median().select(config.BANDS).float() # Load the trained model and use it for prediction.
model = ee.Model.fromAiPlatformPredictor( projectName = PROJECT, modelName = MODEL_NAME, version = VERSION_NAME, inputTileSize = [144, 144], inputOverlapSize = [8, 8], proj = ee.Projection('EPSG:4326').atScale(30), fixInputProj = True, outputBands = {'impervious': { 'type': ee.PixelType.float() } } ) predictions = model.predictImage(image.toArray()) # Use folium to visualize the input imagery and the predictions. mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3}) map = folium.Map(location=[38., -122.5], zoom_start=13) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name='median composite', ).add_to(map) mapid = predictions.getMapId({'min': 0, 'max': 1}) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name='predictions', ).add_to(map) map.add_child(folium.LayerControl()) map ###Output _____no_output_____ ###Markdown IntroductionThis is a demonstration notebook. Suppose you have developed a model the training of which is constrained by the resources available to the notebook VM. In that case, you may want to use the [Google AI Platform](https://cloud.google.com/ml-engine/docs/tensorflow/) to train your model. The advantage of that is that long-running or resource-intensive training jobs can be performed in the background. Also, to use your trained model in Earth Engine, it needs to be [deployed as a hosted model](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) on AI Platform. This notebook uses previously created training data (see [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb)) and AI Platform to train a model, deploy it and use it to make predictions in Earth Engine.
To do that, code [needs to be structured as a Python package](https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer) that can be uploaded to AI Platform. The following cells produce that package programmatically. Setup software librariesInstall needed libraries to the notebook VM. Authenticate as necessary. ###Code # Cloud authentication. from google.colab import auth auth.authenticate_user() # Import and initialize the Earth Engine library. import ee ee.Authenticate() ee.Initialize() # Tensorflow setup. import tensorflow as tf print(tf.__version__) # Folium setup. import folium print(folium.__version__) ###Output _____no_output_____ ###Markdown Training code package setupIt's necessary to create a Python package to hold the training code. Here we're going to get started with that by creating a folder for the package and adding an empty `__init__.py` file. ###Code PACKAGE_PATH = 'ai_platform_demo' !ls -l !mkdir {PACKAGE_PATH} !touch {PACKAGE_PATH}/__init__.py !ls -l {PACKAGE_PATH} ###Output _____no_output_____ ###Markdown VariablesThese variables need to be stored in a place where other code can access them. There are a variety of ways of accomplishing that, but here we'll use the `%%writefile` command to write the contents of the code cell to a file called `config.py`.**Note:** You need to insert the name of a bucket (below) to which you have write access! ###Code %%writefile {PACKAGE_PATH}/config.py import tensorflow as tf # INSERT YOUR PROJECT HERE! PROJECT = 'your-project' # INSERT YOUR BUCKET HERE! BUCKET = 'your-bucket' # This is a good region for hosting AI models. REGION = 'us-central1' # Specify names of output locations in Cloud Storage. FOLDER = 'fcnn-demo' JOB_DIR = 'gs://' + BUCKET + '/' + FOLDER + '/trainer' MODEL_DIR = JOB_DIR + '/model' LOGS_DIR = JOB_DIR + '/logs' # Put the EEified model next to the trained model directory. EEIFIED_DIR = JOB_DIR + '/eeified' # Pre-computed training and eval data.
DATA_BUCKET = 'ee-docs-demos' TRAINING_BASE = 'training_patches' EVAL_BASE = 'eval_patches' # Specify inputs (Landsat bands) to the model and the response variable. opticalBands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7'] thermalBands = ['B10', 'B11'] BANDS = opticalBands + thermalBands RESPONSE = 'impervious' FEATURES = BANDS + [RESPONSE] # Specify the size and shape of patches expected by the model. KERNEL_SIZE = 256 KERNEL_SHAPE = [KERNEL_SIZE, KERNEL_SIZE] COLUMNS = [ tf.io.FixedLenFeature(shape=KERNEL_SHAPE, dtype=tf.float32) for k in FEATURES ] FEATURES_DICT = dict(zip(FEATURES, COLUMNS)) # Sizes of the training and evaluation datasets. TRAIN_SIZE = 16000 EVAL_SIZE = 8000 # Specify model training parameters. BATCH_SIZE = 16 EPOCHS = 50 BUFFER_SIZE = 3000 OPTIMIZER = 'SGD' LOSS = 'MeanSquaredError' METRICS = ['RootMeanSquaredError'] ###Output _____no_output_____ ###Markdown Verify that the written file has the expected contents and is working as intended. ###Code !cat {PACKAGE_PATH}/config.py from ai_platform_demo import config print('\n\n', config.BATCH_SIZE) ###Output _____no_output_____ ###Markdown Training data, evaluation data and modelThe following is code to load training/evaluation data and the model. Write this into `model.py`. Note that these functions are developed and explained in [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb). ###Code %%writefile {PACKAGE_PATH}/model.py from . 
import config import tensorflow as tf from tensorflow.python.keras import layers from tensorflow.python.keras import losses from tensorflow.python.keras import metrics from tensorflow.python.keras import models from tensorflow.python.keras import optimizers # Dataset loading functions def parse_tfrecord(example_proto): return tf.io.parse_single_example(example_proto, config.FEATURES_DICT) def to_tuple(inputs): inputsList = [inputs.get(key) for key in config.FEATURES] stacked = tf.stack(inputsList, axis=0) stacked = tf.transpose(stacked, [1, 2, 0]) return stacked[:,:,:len(config.BANDS)], stacked[:,:,len(config.BANDS):] def get_dataset(pattern): glob = tf.io.gfile.glob(pattern) dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP') dataset = dataset.map(parse_tfrecord) dataset = dataset.map(to_tuple) return dataset def get_training_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.TRAINING_BASE + '*' dataset = get_dataset(glob) dataset = dataset.shuffle(config.BUFFER_SIZE).batch(config.BATCH_SIZE).repeat() return dataset def get_eval_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.EVAL_BASE + '*' dataset = get_dataset(glob) dataset = dataset.batch(1).repeat() return dataset # A variant of the UNET model. 
def conv_block(input_tensor, num_filters): encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(input_tensor) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(encoder) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) return encoder def encoder_block(input_tensor, num_filters): encoder = conv_block(input_tensor, num_filters) encoder_pool = layers.MaxPooling2D((2, 2), strides=(2, 2))(encoder) return encoder_pool, encoder def decoder_block(input_tensor, concat_tensor, num_filters): decoder = layers.Conv2DTranspose(num_filters, (2, 2), strides=(2, 2), padding='same')(input_tensor) decoder = layers.concatenate([concat_tensor, decoder], axis=-1) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) return decoder def get_model(): inputs = layers.Input(shape=[None, None, len(config.BANDS)]) # 256 encoder0_pool, encoder0 = encoder_block(inputs, 32) # 128 encoder1_pool, encoder1 = encoder_block(encoder0_pool, 64) # 64 encoder2_pool, encoder2 = encoder_block(encoder1_pool, 128) # 32 encoder3_pool, encoder3 = encoder_block(encoder2_pool, 256) # 16 encoder4_pool, encoder4 = encoder_block(encoder3_pool, 512) # 8 center = conv_block(encoder4_pool, 1024) # center decoder4 = decoder_block(center, encoder4, 512) # 16 decoder3 = decoder_block(decoder4, encoder3, 256) # 32 decoder2 = decoder_block(decoder3, encoder2, 128) # 64 decoder1 = decoder_block(decoder2, encoder1, 64) # 128 decoder0 = decoder_block(decoder1, encoder0, 32) # 256 outputs = layers.Conv2D(1, (1, 1), 
activation='sigmoid')(decoder0) model = models.Model(inputs=[inputs], outputs=[outputs]) model.compile( optimizer=optimizers.get(config.OPTIMIZER), loss=losses.get(config.LOSS), metrics=[metrics.get(metric) for metric in config.METRICS]) return model ###Output _____no_output_____ ###Markdown Verify that `model.py` is functioning as intended. ###Code from ai_platform_demo import model eval = model.get_eval_dataset() print(iter(eval.take(1)).next()) model = model.get_model() print(model.summary()) ###Output _____no_output_____ ###Markdown Training taskAt this stage, there should be `config.py` storing variables and `model.py` which has code for getting the training/evaluation data and the model. All that's left is code for training the model. The following will create `task.py`, which will get the training and eval data, train the model and save it when it's done in a Cloud Storage bucket. ###Code %%writefile {PACKAGE_PATH}/task.py from . import config from . import model import tensorflow as tf if __name__ == '__main__': training = model.get_training_dataset() evaluation = model.get_eval_dataset() m = model.get_model() m.fit( x=training, epochs=config.EPOCHS, steps_per_epoch=int(config.TRAIN_SIZE / config.BATCH_SIZE), validation_data=evaluation, validation_steps=int(config.EVAL_SIZE), callbacks=[tf.keras.callbacks.TensorBoard(config.LOGS_DIR)]) m.save(config.MODEL_DIR, save_format='tf') ###Output _____no_output_____ ###Markdown Submit the package to AI Platform for trainingNow there's everything to submit this job, which can be done from the command line. First, define some needed variables.**Note:** You need to insert the name of a Cloud project (below) you own! ###Code import time JOB_NAME = 'demo_training_job_' + str(int(time.time())) TRAINER_PACKAGE_PATH = 'ai_platform_demo' MAIN_TRAINER_MODULE = 'ai_platform_demo.task' REGION = 'us-central1' ###Output _____no_output_____ ###Markdown Now the training job is ready to be started. 
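As a quick sanity check, the step counts that `task.py` derives from the values in `config.py` work out as follows. This is illustrative arithmetic only; the constants are copied from the config cell above:

```python
# These constants mirror config.py; the expressions mirror what task.py
# passes to m.fit().
TRAIN_SIZE = 16000
EVAL_SIZE = 8000
BATCH_SIZE = 16

steps_per_epoch = int(TRAIN_SIZE / BATCH_SIZE)  # one full pass over the training patches
validation_steps = int(EVAL_SIZE)               # eval data is batched with batch size 1

print(steps_per_epoch, validation_steps)  # 1000 8000
```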
First, you need to enable the ML API for your project. This can be done from [this link to the Cloud Console](https://console.developers.google.com/apis/library/ml.googleapis.com). See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/training-jobs) for details. Note that the Python and TensorFlow versions should match what is used in the Colab notebook. ###Code !gcloud ai-platform jobs submit training {JOB_NAME} \ --job-dir {config.JOB_DIR} \ --package-path {TRAINER_PACKAGE_PATH} \ --module-name {MAIN_TRAINER_MODULE} \ --region {REGION} \ --project {config.PROJECT} \ --runtime-version 2.3 \ --python-version 3.7 \ --scale-tier basic-gpu ###Output _____no_output_____ ###Markdown Monitor the training jobThere's not much more to do until the model is finished training (~24 hours), but it's fun and useful to monitor its progress. You can do that programmatically with another `gcloud` command. The output of that command can be read into an `IPython.utils.text.SList` from which the `state` is extracted and ensured to be `SUCCEEDED`. Or you can monitor it from the [AI Platform jobs page](http://console.cloud.google.com/ai-platform/jobs) on the Cloud Console. ###Code desc = !gcloud ai-platform jobs describe {JOB_NAME} --project {config.PROJECT} state = desc.grep('state:')[0].split(':')[1].strip() print(state) ###Output _____no_output_____ ###Markdown Inspect the trained modelOnce the training job has finished, verify that you can load the trained model and print a summary of the fitted parameters. It's also useful to examine the logs with [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard). There's a convenient notebook extension that will launch TensorBoard in the Colab notebook. Examine the training and testing learning curves to ensure that the training process has converged.
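As an aside, the `state` extraction performed by the monitoring cell above can be sketched in plain Python. The `describe_output` string below is a hypothetical example of what `gcloud ai-platform jobs describe` prints; real output contains many more fields:

```python
# Hypothetical describe output; field values will differ for a real job.
describe_output = """\
createTime: '2020-01-01T00:00:00Z'
jobId: demo_training_job_1577836800
state: SUCCEEDED
trainingInput:
  region: us-central1
"""

def extract_state(text):
    """Return the value of the first 'state:' line, mimicking
    desc.grep('state:')[0].split(':')[1].strip() from the cell above."""
    for line in text.splitlines():
        if line.startswith('state:'):
            return line.split(':', 1)[1].strip()
    return None

print(extract_state(describe_output))  # SUCCEEDED
```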
###Code %load_ext tensorboard %tensorboard --logdir {config.LOGS_DIR} ###Output _____no_output_____ ###Markdown Prepare the model for making predictions in Earth EngineBefore we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to the input and outputs of the model in order to accommodate the interchange format between pixels from Earth Engine (float32) and inputs to AI Platform (base64). (See [this doc](https://cloud.google.com/ml-engine/docs/online-predict#binary_data_in_prediction_input) for details.) `earthengine model prepare`The EEification process is handled for you using the Earth Engine command `earthengine model prepare`. To use that command, we need to specify the input and output model directories and the name of the input and output nodes in the TensorFlow computation graph. We can do all that programmatically: ###Code from tensorflow.python.tools import saved_model_utils meta_graph_def = saved_model_utils.get_meta_graph_def(config.MODEL_DIR, 'serve') inputs = meta_graph_def.signature_def['serving_default'].inputs outputs = meta_graph_def.signature_def['serving_default'].outputs # Just get the first thing(s) from the serving signature def. i.e. this # model only has a single input and a single output. input_name = None for k,v in inputs.items(): input_name = v.name break output_name = None for k,v in outputs.items(): output_name = v.name break # Make a dictionary that maps Earth Engine outputs and inputs to # AI Platform inputs and outputs, respectively. import json input_dict = "'" + json.dumps({input_name: "array"}) + "'" output_dict = "'" + json.dumps({output_name: "impervious"}) + "'" # You need to set the project before using the model prepare command.
!earthengine set_project {config.PROJECT} !earthengine model prepare --source_dir {config.MODEL_DIR} --dest_dir {config.EEIFIED_DIR} --input {input_dict} --output {output_dict} ###Output _____no_output_____ ###Markdown Note that you can also use the TensorFlow saved model command line tool to do this manually. See [this doc](https://www.tensorflow.org/guide/saved_model#cli_to_inspect_and_execute_savedmodel) for details. Also note the names we've specified for the new inputs and outputs: `array` and `impervious`, respectively. Perform inference using the trained model in Earth EngineBefore it's possible to get predictions from the trained and EEified model, it needs to be deployed on AI Platform. The first step is to create the model. The second step is to create a version. See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) for details. Note that models and versions can be monitored from the [AI Platform models page](http://console.cloud.google.com/ai-platform/models) of the Cloud Console. To ensure that the model is ready for predictions without having to warm up nodes, you can use a configuration yaml file to set the scaling type of this version to `autoScaling` and set a minimum number of nodes for the version. This will ensure there are always nodes on stand-by; however, you will be charged as long as they are running. For this example, we'll set `minNodes` to 10. That means that at a minimum, 10 nodes are always up and running and waiting for predictions. The number of nodes will also scale up automatically if needed.
###Code %%writefile config.yaml autoScaling: minNodes: 10 MODEL_NAME = 'fcnn_demo_model' VERSION_NAME = 'v' + str(int(time.time())) print('Creating version: ' + VERSION_NAME) !gcloud ai-platform models create {MODEL_NAME} \ --project {config.PROJECT} \ --region {REGION} !gcloud ai-platform versions create {VERSION_NAME} \ --project {config.PROJECT} \ --model {MODEL_NAME} \ --region {REGION} \ --origin {config.EEIFIED_DIR} \ --framework "TENSORFLOW" \ --runtime-version 2.3 \ --python-version 3.7 \ --config=config.yaml ###Output _____no_output_____ ###Markdown There is now a trained model, prepared for serving to Earth Engine, hosted and versioned on AI Platform. We can now connect Earth Engine directly to the trained model for inference. You do that with the `ee.Model.fromAiPlatformPredictor` command. `ee.Model.fromAiPlatformPredictor`For this command to work, we need to know a lot about the model. To connect to the model, you need to know the name and version. InputsYou need to be able to recreate the imagery on which it was trained in order to perform inference. Specifically, you need to create an array-valued input from the scaled data and use that for input. (Recall that the new input node is named `array`, which is convenient because the array image has one band, named `array` by default.) The inputs will be provided as 144x144 patches (`inputTileSize`), at 30-meter resolution (`proj`), but 8 pixels will be thrown out (`inputOverlapSize`) to minimize boundary effects. OutputsThe output (which you also need to know) is a single float band named `impervious`. ###Code # Use Landsat 8 surface reflectance data. l8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR') # Cloud masking function.
def maskL8sr(image): cloudShadowBitMask = ee.Number(2).pow(3).int() cloudsBitMask = ee.Number(2).pow(5).int() qa = image.select('pixel_qa') mask1 = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And( qa.bitwiseAnd(cloudsBitMask).eq(0)) mask2 = image.mask().reduce('min') mask3 = image.select(config.opticalBands).gt(0).And( image.select(config.opticalBands).lt(10000)).reduce('min') mask = mask1.And(mask2).And(mask3) return image.select(config.opticalBands).divide(10000).addBands( image.select(config.thermalBands).divide(10).clamp(273.15, 373.15) .subtract(273.15).divide(100)).updateMask(mask) # The image input data is a cloud-masked median composite. image = l8sr.filterDate( '2015-01-01', '2017-12-31').map(maskL8sr).median().select(config.BANDS).float() # Load the trained model and use it for prediction. If you specified a region # other than the default (us-central1) at model creation, specify it here. model = ee.Model.fromAiPlatformPredictor( projectName = config.PROJECT, modelName = MODEL_NAME, version = VERSION_NAME, inputTileSize = [144, 144], inputOverlapSize = [8, 8], proj = ee.Projection('EPSG:4326').atScale(30), fixInputProj = True, outputBands = {'impervious': { 'type': ee.PixelType.float() } } ) predictions = model.predictImage(image.toArray()) # Use folium to visualize the input imagery and the predictions. mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3}) map = folium.Map(location=[38., -122.5], zoom_start=13) folium.TileLayer( tiles=mapid['tile_fetcher'].url_format, attr='Google Earth Engine', overlay=True, name='median composite', ).add_to(map) mapid = predictions.getMapId({'min': 0, 'max': 1}) folium.TileLayer( tiles=mapid['tile_fetcher'].url_format, attr='Google Earth Engine', overlay=True, name='predictions', ).add_to(map) map.add_child(folium.LayerControl()) map ###Output _____no_output_____ ###Markdown Run in Google Colab View source on GitHub IntroductionThis is a demonstration notebook. 
Suppose you have developed a model whose training is constrained by the resources available to the notebook VM. In that case, you may want to use the [Google AI Platform](https://cloud.google.com/ml-engine/docs/tensorflow/) to train your model. The advantage of that is that long-running or resource-intensive training jobs can be performed in the background. Also, to use your trained model in Earth Engine, it needs to be [deployed as a hosted model](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) on AI Platform. This notebook uses previously created training data (see [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb)) and AI Platform to train a model, deploy it and use it to make predictions in Earth Engine. To do that, code [needs to be structured as a python package](https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer) that can be uploaded to AI Platform. The following cells produce that package programmatically. Setup software librariesInstall needed libraries to the notebook VM. Authenticate as necessary. ###Code # Cloud authentication. from google.colab import auth auth.authenticate_user() # Import and initialize the Earth Engine library. import ee ee.Authenticate() ee.Initialize() # TensorFlow setup. import tensorflow as tf tf.enable_eager_execution() print(tf.__version__) # Folium setup. import folium print(folium.__version__) ###Output _____no_output_____ ###Markdown Training code package setupIt's necessary to create a Python package to hold the training code. Here we're going to get started with that by creating a folder for the package and adding an empty `__init__.py` file.
###Code PACKAGE_PATH = 'ai_platform_demo' !ls -l !mkdir {PACKAGE_PATH} !touch {PACKAGE_PATH}/__init__.py !ls -l {PACKAGE_PATH} ###Output _____no_output_____ ###Markdown VariablesThese variables need to be stored in a place where other code can access them. There are a variety of ways of accomplishing that, but here we'll use the `%%writefile` command to write the contents of the code cell to a file called `config.py`.**Note:** You need to insert the name of a bucket (below) to which you have write access! ###Code %%writefile {PACKAGE_PATH}/config.py import tensorflow as tf # INSERT YOUR BUCKET HERE! BUCKET = 'your-bucket-name' # Specify names of output locations in Cloud Storage. FOLDER = 'fcnn-demo' JOB_DIR = 'gs://' + BUCKET + '/' + FOLDER + '/trainer' MODEL_DIR = JOB_DIR + '/model' LOGS_DIR = JOB_DIR + '/logs' # Pre-computed training and eval data. DATA_BUCKET = 'ee-docs-demos' TRAINING_BASE = 'training_patches' EVAL_BASE = 'eval_patches' # Specify inputs (Landsat bands) to the model and the response variable. opticalBands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7'] thermalBands = ['B10', 'B11'] BANDS = opticalBands + thermalBands RESPONSE = 'impervious' FEATURES = BANDS + [RESPONSE] # Specify the size and shape of patches expected by the model. KERNEL_SIZE = 256 KERNEL_SHAPE = [KERNEL_SIZE, KERNEL_SIZE] COLUMNS = [ tf.io.FixedLenFeature(shape=KERNEL_SHAPE, dtype=tf.float32) for k in FEATURES ] FEATURES_DICT = dict(zip(FEATURES, COLUMNS)) # Sizes of the training and evaluation datasets. TRAIN_SIZE = 16000 EVAL_SIZE = 8000 # Specify model training parameters. BATCH_SIZE = 16 EPOCHS = 50 BUFFER_SIZE = 3000 OPTIMIZER = 'SGD' LOSS = 'MeanSquaredError' METRICS = ['RootMeanSquaredError'] ###Output _____no_output_____ ###Markdown Verify that the written file has the expected contents and is working as intended. 
###Code !cat {PACKAGE_PATH}/config.py from ai_platform_demo import config print('\n\n', config.BATCH_SIZE) ###Output _____no_output_____ ###Markdown Training data, evaluation data and modelThe following is code to load training/evaluation data and the model. Write this into `model.py`. Note that these functions are developed and explained in [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb). ###Code %%writefile {PACKAGE_PATH}/model.py from . import config import tensorflow as tf from tensorflow.python.keras import layers from tensorflow.python.keras import losses from tensorflow.python.keras import metrics from tensorflow.python.keras import models from tensorflow.python.keras import optimizers # Dataset loading functions def parse_tfrecord(example_proto): return tf.io.parse_single_example(example_proto, config.FEATURES_DICT) def to_tuple(inputs): inputsList = [inputs.get(key) for key in config.FEATURES] stacked = tf.stack(inputsList, axis=0) stacked = tf.transpose(stacked, [1, 2, 0]) return stacked[:,:,:len(config.BANDS)], stacked[:,:,len(config.BANDS):] def get_dataset(pattern): glob = tf.io.gfile.glob(pattern) dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP') dataset = dataset.map(parse_tfrecord) dataset = dataset.map(to_tuple) return dataset def get_training_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.TRAINING_BASE + '*' dataset = get_dataset(glob) dataset = dataset.shuffle(config.BUFFER_SIZE).batch(config.BATCH_SIZE).repeat() return dataset def get_eval_dataset(): glob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.EVAL_BASE + '*' dataset = get_dataset(glob) dataset = dataset.batch(1).repeat() return dataset # A variant of the UNET model. 
def conv_block(input_tensor, num_filters): encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(input_tensor) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(encoder) encoder = layers.BatchNormalization()(encoder) encoder = layers.Activation('relu')(encoder) return encoder def encoder_block(input_tensor, num_filters): encoder = conv_block(input_tensor, num_filters) encoder_pool = layers.MaxPooling2D((2, 2), strides=(2, 2))(encoder) return encoder_pool, encoder def decoder_block(input_tensor, concat_tensor, num_filters): decoder = layers.Conv2DTranspose(num_filters, (2, 2), strides=(2, 2), padding='same')(input_tensor) decoder = layers.concatenate([concat_tensor, decoder], axis=-1) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder) decoder = layers.BatchNormalization()(decoder) decoder = layers.Activation('relu')(decoder) return decoder def get_model(): inputs = layers.Input(shape=[None, None, len(config.BANDS)]) # 256 encoder0_pool, encoder0 = encoder_block(inputs, 32) # 128 encoder1_pool, encoder1 = encoder_block(encoder0_pool, 64) # 64 encoder2_pool, encoder2 = encoder_block(encoder1_pool, 128) # 32 encoder3_pool, encoder3 = encoder_block(encoder2_pool, 256) # 16 encoder4_pool, encoder4 = encoder_block(encoder3_pool, 512) # 8 center = conv_block(encoder4_pool, 1024) # center decoder4 = decoder_block(center, encoder4, 512) # 16 decoder3 = decoder_block(decoder4, encoder3, 256) # 32 decoder2 = decoder_block(decoder3, encoder2, 128) # 64 decoder1 = decoder_block(decoder2, encoder1, 64) # 128 decoder0 = decoder_block(decoder1, encoder0, 32) # 256 outputs = layers.Conv2D(1, (1, 1), 
activation='sigmoid')(decoder0) model = models.Model(inputs=[inputs], outputs=[outputs]) model.compile( optimizer=optimizers.get(config.OPTIMIZER), loss=losses.get(config.LOSS), metrics=[metrics.get(metric) for metric in config.METRICS]) return model ###Output _____no_output_____ ###Markdown Verify that `model.py` is functioning as intended. ###Code from ai_platform_demo import model eval = model.get_eval_dataset() print(iter(eval.take(1)).next()) model = model.get_model() print(model.summary()) ###Output _____no_output_____ ###Markdown Training taskAt this stage, there should be `config.py` storing variables and `model.py` which has code for getting the training/evaluation data and the model. All that's left is code for training the model. The following will create `task.py`, which will get the training and eval data, train the model and save it when it's done in a Cloud Storage bucket. ###Code %%writefile {PACKAGE_PATH}/task.py from . import config from . import model import tensorflow as tf if __name__ == '__main__': training = model.get_training_dataset() evaluation = model.get_eval_dataset() m = model.get_model() m.fit( x=training, epochs=config.EPOCHS, steps_per_epoch=int(config.TRAIN_SIZE / config.BATCH_SIZE), validation_data=evaluation, validation_steps=int(config.EVAL_SIZE), callbacks=[tf.keras.callbacks.TensorBoard(config.LOGS_DIR)]) tf.contrib.saved_model.save_keras_model(m, config.MODEL_DIR) ###Output _____no_output_____ ###Markdown Submit the package to AI Platform for trainingNow there's everything to submit this job, which can be done from the command line. First, define some needed variables.**Note:** You need to insert the name of a Cloud project (below) you own! ###Code import time # INSERT YOUR PROJECT HERE! 
PROJECT = 'your-project' JOB_NAME = 'demo_training_job_' + str(int(time.time())) TRAINER_PACKAGE_PATH = 'ai_platform_demo' MAIN_TRAINER_MODULE = 'ai_platform_demo.task' REGION = 'us-central1' ###Output _____no_output_____ ###Markdown Now the training job is ready to be started. First, you need to enable the ML API for your project. This can be done from [this link to the Cloud Console](https://console.developers.google.com/apis/library/ml.googleapis.com). See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/training-jobs) for details. Note that the Python and TensorFlow versions should match what is used in the Colab notebook. ###Code !gcloud ai-platform jobs submit training {JOB_NAME} \ --job-dir {config.JOB_DIR} \ --package-path {TRAINER_PACKAGE_PATH} \ --module-name {MAIN_TRAINER_MODULE} \ --region {REGION} \ --project {PROJECT} \ --runtime-version 1.14 \ --python-version 3.5 \ --scale-tier basic-gpu ###Output _____no_output_____ ###Markdown Monitor the training jobThere's not much more to do until the model is finished training (~24 hours), but it's fun and useful to monitor its progress. You can do that programmatically with another `gcloud` command. The output of that command can be read into an `IPython.utils.text.SList` from which the `state` is extracted and ensured to be `SUCCEEDED`. Or you can monitor it from the [AI Platform jobs page](http://console.cloud.google.com/ai-platform/jobs) on the Cloud Console. ###Code desc = !gcloud ai-platform jobs describe {JOB_NAME} --project {PROJECT} state = desc.grep('state:')[0].split(':')[1].strip() print(state) ###Output _____no_output_____ ###Markdown Inspect the trained modelOnce the training job has finished, verify that you can load the trained model and print a summary of the fitted parameters. It's also useful to examine the logs with [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard).
There's a convenient notebook extension that will launch TensorBoard in the Colab notebook. Examine the training and testing learning curves to ensure that the training process has converged. ###Code %load_ext tensorboard %tensorboard --logdir {config.LOGS_DIR} ###Output _____no_output_____ ###Markdown Prepare the model for making predictions in Earth EngineBefore we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to the input and outputs of the model in order to accommodate the interchange format between pixels from Earth Engine (float32) and inputs to AI Platform (base64). (See [this doc](https://cloud.google.com/ml-engine/docs/online-predict#binary_data_in_prediction_input) for details.) `earthengine model prepare`The EEification process is handled for you using the Earth Engine command `earthengine model prepare`. To use that command, we need to specify the input and output model directories and the name of the input and output nodes in the TensorFlow computation graph. We can do all that programmatically: ###Code from tensorflow.python.tools import saved_model_utils meta_graph_def = saved_model_utils.get_meta_graph_def(config.MODEL_DIR, 'serve') inputs = meta_graph_def.signature_def['serving_default'].inputs outputs = meta_graph_def.signature_def['serving_default'].outputs # Just get the first thing(s) from the serving signature def. i.e. this # model only has a single input and a single output. input_name = None for k,v in inputs.items(): input_name = v.name break output_name = None for k,v in outputs.items(): output_name = v.name break # Make a dictionary that maps Earth Engine outputs and inputs to # AI Platform inputs and outputs, respectively.
import json input_dict = "'" + json.dumps({input_name: "array"}) + "'" output_dict = "'" + json.dumps({output_name: "impervious"}) + "'" # Put the EEified model next to the trained model directory. EEIFIED_DIR = config.JOB_DIR + '/eeified' # You need to set the project before using the model prepare command. !earthengine set_project {PROJECT} !earthengine model prepare --source_dir {config.MODEL_DIR} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict} ###Output _____no_output_____ ###Markdown Note that you can also use the TensorFlow saved model command line tool to do this manually. See [this doc](https://www.tensorflow.org/guide/saved_model#cli_to_inspect_and_execute_savedmodel) for details. Also note the names we've specified for the new inputs and outputs: `array` and `impervious`, respectively. Perform inference using the trained model in Earth EngineBefore it's possible to get predictions from the trained and EEified model, it needs to be deployed on AI Platform. The first step is to create the model. The second step is to create a version. See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) for details. Note that models and versions can be monitored from the [AI Platform models page](http://console.cloud.google.com/ai-platform/models) of the Cloud Console. To ensure that the model is ready for predictions without having to warm up nodes, you can use a configuration yaml file to set the scaling type of this version to `autoScaling` and set a minimum number of nodes for the version. This will ensure there are always nodes on stand-by; however, you will be charged as long as they are running. For this example, we'll set `minNodes` to 10. That means that at a minimum, 10 nodes are always up and running and waiting for predictions. The number of nodes will also scale up automatically if needed.
###Code %%writefile config.yaml autoScaling: minNodes: 10 MODEL_NAME = 'fcnn_demo_model' VERSION_NAME = 'v' + str(int(time.time())) print('Creating version: ' + VERSION_NAME) !gcloud ai-platform models create {MODEL_NAME} --project {PROJECT} !gcloud ai-platform versions create {VERSION_NAME} \ --project {PROJECT} \ --model {MODEL_NAME} \ --origin {EEIFIED_DIR} \ --runtime-version=1.14 \ --framework "TENSORFLOW" \ --python-version=3.5 --config=config.yaml ###Output _____no_output_____ ###Markdown There is now a trained model, prepared for serving to Earth Engine, hosted and versioned on AI Platform. We can now connect Earth Engine directly to the trained model for inference. You do that with the `ee.Model.fromAiPlatformPredictor` command. `ee.Model.fromAiPlatformPredictor`For this command to work, we need to know a lot about the model. To connect to the model, you need to know the name and version. InputsYou need to be able to recreate the imagery on which it was trained in order to perform inference. Specifically, you need to create an array-valued input from the scaled data and use that for input. (Recall that the new input node is named `array`, which is convenient because the array image has one band, named `array` by default.) The inputs will be provided as 144x144 patches (`inputTileSize`), at 30-meter resolution (`proj`), but 8 pixels will be thrown out (`inputOverlapSize`) to minimize boundary effects. OutputsThe output (which you also need to know) is a single float band named `impervious`. ###Code # Use Landsat 8 surface reflectance data. l8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR') # Cloud masking function.
def maskL8sr(image): cloudShadowBitMask = ee.Number(2).pow(3).int() cloudsBitMask = ee.Number(2).pow(5).int() qa = image.select('pixel_qa') mask1 = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And( qa.bitwiseAnd(cloudsBitMask).eq(0)) mask2 = image.mask().reduce('min') mask3 = image.select(config.opticalBands).gt(0).And( image.select(config.opticalBands).lt(10000)).reduce('min') mask = mask1.And(mask2).And(mask3) return image.select(config.opticalBands).divide(10000).addBands( image.select(config.thermalBands).divide(10).clamp(273.15, 373.15) .subtract(273.15).divide(100)).updateMask(mask) # The image input data is a cloud-masked median composite. image = l8sr.filterDate( '2015-01-01', '2017-12-31').map(maskL8sr).median().select(config.BANDS).float() # Load the trained model and use it for prediction. model = ee.Model.fromAiPlatformPredictor( projectName = PROJECT, modelName = MODEL_NAME, version = VERSION_NAME, inputTileSize = [144, 144], inputOverlapSize = [8, 8], proj = ee.Projection('EPSG:4326').atScale(30), fixInputProj = True, outputBands = {'impervious': { 'type': ee.PixelType.float() } } ) predictions = model.predictImage(image.toArray()) # Use folium to visualize the input imagery and the predictions. map_id_dict = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3}) map = folium.Map(location=[38., -122.5], zoom_start=13) folium.TileLayer( tiles=map_id_dict['tile_fetcher'].url_format, attr='Map Data &copy; <a href="https://earthengine.google.com/">Google Earth Engine</a>', overlay=True, name='median composite', ).add_to(map) map_id_dict = predictions.getMapId({'min': 0, 'max': 1}) folium.TileLayer( tiles=map_id_dict['tile_fetcher'].url_format, attr='Map Data &copy; <a href="https://earthengine.google.com/">Google Earth Engine</a>', overlay=True, name='predictions', ).add_to(map) map.add_child(folium.LayerControl()) map ###Output _____no_output_____
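The QA-bit test inside `maskL8sr` above can be illustrated with plain Python. Bit 3 of `pixel_qa` flags cloud shadow and bit 5 flags cloud, matching the `ee.Number(2).pow(3)` and `ee.Number(2).pow(5)` masks; the sample QA values below are made up for illustration:

```python
# Bit positions matching the Earth Engine masks built above.
CLOUD_SHADOW_BIT = 1 << 3  # 8
CLOUDS_BIT = 1 << 5        # 32

def is_clear(pixel_qa):
    """A pixel is kept only when neither the shadow nor the cloud bit is set."""
    return (pixel_qa & CLOUD_SHADOW_BIT) == 0 and (pixel_qa & CLOUDS_BIT) == 0

# Hypothetical QA values: clear, shadow, cloud, shadow+cloud, clear.
samples = [0, 8, 32, 40, 2]
print([is_clear(qa) for qa in samples])  # [True, False, False, False, True]
```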
docs/sphinx/jupyter/whats_new_v22.ipynb
###Markdown
What's New in Marvin 2.2!

Lots of things are new in Marvin 2.2.0. See the list with links to individual sections here http://sdss-marvin.readthedocs.io/en/latest/whats-new.html

Marvin now includes MPL-6 data
###Code
%matplotlib inline
from marvin import config
config.switchSasUrl('local')
config.forceDbOff()

from marvin.tools.cube import Cube
plateifu = '8485-1901'
cube = Cube(plateifu=plateifu)
print(cube)
maps = cube.getMaps(bintype='HYB10')
print(maps)
###Output
WARNING: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
<Marvin Cube (plateifu='8485-1901', mode='local', data_origin='file')>
<Marvin Maps (plateifu='8485-1901', mode='local', data_origin='file', bintype='HYB10', template='GAU-MILESHC')>
###Markdown
Smarter handling of inputs

You can still specify **plateifu**, **mangaid**, or **filename**, but now Marvin will try to guess your input type if you do not specify an input keyword argument.
###Code
from marvin.tools.maps import Maps
maps = Maps(plateifu)

# or a filename
maps = Maps('/Users/Brian/Work/Manga/analysis/v2_3_1/2.1.3/SPX-GAU-MILESHC/8485/1901/manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz')
print(maps)
###Output
<Marvin Maps (plateifu='8485-1901', mode='local', data_origin='file', bintype='SPX', template='GAU-MILESHC')>
###Markdown
Fuzzy indexing and extraction

Marvin now includes fuzzy lists and dictionaries in the Maps and Datamodels. This means Marvin will try to guess what you mean by what you type. For example, all of these methods grab the H-alpha flux map.
###Code
# grab an H-alpha flux map
ha = maps['emline_gflux_ha_6564']

# fuzzy name indexing
ha = maps['gflux_ha']

# all map properties are available as class attributes. If using iPython, you can tab complete to see them all.
ha = maps.emline_gflux_ha_6564
###Output
WARNING: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs.
Use frombuffer instead
###Markdown
New DRP, DAP and Query Datamodels

There are new datamodels representing the MaNGA data for DRP, DAP and Query parameters. The datamodel is attached to every object you instantiate, or it can be accessed independently. For example, the **Maps** datamodel will list all the available map properties. See http://sdss-marvin.readthedocs.io/en/latest/datamodel/datamodels.html for details.
###Code
# see the datamodel on maps
maps.datamodel
###Output
_____no_output_____
###Markdown
Each **Property** contains a name, a channel, the unit of the property, and a description.
###Code
haew_prop = maps.datamodel['emline_gew_ha']
haew_prop
print(haew_prop.name, haew_prop.unit, haew_prop.description)
###Output
emline_gew Angstrom Gaussian-fitted equivalent widths measurements (based on EMLINE_GFLUX)
###Markdown
The full datamodel is available as a **parent** attribute, or you can import it directly
###Code
dapdm = maps.datamodel.parent
print(dapdm)

# get a list of all available DAP datamodels
from marvin.utils.datamodel.dap import datamodel
print(datamodel)

# let's get the MPL-6 datamodel
dapdm = datamodel['MPL-6']
print(dapdm)
###Output
<DAPDataModel release='2.1.3', n_bintypes=5, n_templates=1, n_properties=292>
[<DAPDataModel release='1.1.1', n_bintypes=3, n_templates=3, n_properties=92>, <DAPDataModel release='2.0.2', n_bintypes=4, n_templates=1, n_properties=151>, <DAPDataModel release='2.1.3', n_bintypes=5, n_templates=1, n_properties=292>]
<DAPDataModel release='2.1.3', n_bintypes=5, n_templates=1, n_properties=292>
###Markdown
Cubes, Maps, ModelCubes now utilize Quantity-based Objects

Most Marvin Tools now use new objects to represent their data. **DataCubes** represent 3-d data, while a **Spectrum** represents a 1-d array of data. These subclass from Astropy Quantities. This means most properties now have associated units. We also now track and propagate inverse variances and masks.
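The core idea behind these Quantity-based objects can be sketched as a toy class in plain Python (a stand-in for illustration, not Marvin's actual implementation): the value, its unit, the mask, and the wavelength grid travel together, and slicing keeps them aligned so nothing desynchronizes.

```python
# Toy stand-in for a Quantity-based Spectrum (not Marvin's real class).
class ToySpectrum:
    def __init__(self, value, unit, mask, wavelength):
        self.value = value            # flux values
        self.unit = unit              # unit string attached to the values
        self.mask = mask              # per-pixel mask values
        self.wavelength = wavelength  # wavelength grid, same length as value

    def __getitem__(self, sl):
        # Slice every attached array in lockstep; the unit is carried along.
        return ToySpectrum(self.value[sl], self.unit,
                           self.mask[sl], self.wavelength[sl])

spec = ToySpectrum([0.55, 0.47, 0.46],
                   '1e-17 erg / (Angstrom cm2 s spaxel)',
                   [0, 0, 1027],
                   [3621.6, 3622.4, 3623.3])
cut = spec[1:]
print(cut.value, cut.mask, cut.unit)
```

The real objects do this with Astropy Quantity machinery (so arithmetic also propagates units), but the bookkeeping idea is the same.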
###Code # the cube datamodel shows the available datacubes cube.datamodel.datacubes # and spectra cube.datamodel.spectra ###Output _____no_output_____ ###Markdown The cube flux is now a **DataCube**, has proper units, has an ivar, mask, and wavelength attached to it ###Code print(type(cube.flux)) print('flux', cube.flux) print('mask', cube.flux.mask) print('wavelength', cube.flux.wavelength) ###Output <class 'marvin.tools.quantities.datacube.DataCube'> flux [[[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] ... [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]]] 1e-17 erg / (Angstrom cm2 s spaxel) mask [[[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] [[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] [[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 
1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] ... [[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] [[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] [[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]]] wavelength [ 3621.59598486 3622.42998417 3623.26417553 ... 10349.03843826 10351.42166679 10353.80544415] Angstrom ###Markdown Slicing a **Datacube** in 2-d will return a new **DataCube**, while slicing in 3-d will return a **Spectrum** ###Code spec = cube.flux[:,17,17] print(type(spec)) print(spec) print(spec.unit) spec.plot() ###Output <class 'marvin.tools.quantities.spectrum.Spectrum'> [0.54676276 0.46566465 0.4622981 ... 0. 0. 0. ] 1e-17 erg / (Angstrom cm2 s spaxel) 1e-17 erg / (Angstrom cm2 s spaxel) ###Markdown MaskbitsThere is a new Maskbit class for improved maskbit handling. All objects now include new **Maskbit** versions of the DRP/DAP quality flag (**quality_flag**), targeting bits (**target_flags**), and pixel masks (**pixmask**). Now you can easily look up the labels for bits and create custom masks. 
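For instance, decomposing a mask value such as 1027 into its set bits, which is what a bit-to-label lookup rests on, can be sketched in plain Python (an illustrative sketch, not Marvin's implementation):

```python
# Plain-Python sketch of integer-mask bit decomposition:
# 1027 = 2**0 + 2**1 + 2**10, so bits 0, 1 and 10 are set.
def set_bits(value):
    """Return the list of set bit positions in an integer mask value."""
    return [i for i in range(value.bit_length()) if (value >> i) & 1]

print(set_bits(1027))  # → [0, 1, 10]
```

Each bit position then maps to a named label in the relevant maskbit schema.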
See http://sdss-marvin.readthedocs.io/en/latest/utils/maskbit.html for details
###Code
# H-alpha DAP quality flag
ha.quality_flag
ha.target_flags
ha.pixmask

# bits for mask value 1027
print('bits', ha.pixmask.values_to_bits(1027))
print('labels', ha.pixmask.values_to_labels(1027))

# convert the H-alpha mask into a list of labels
ha.pixmask.labels
###Output
_____no_output_____
###Markdown
Improved Query and Results Handling

The handling of Queries and Results has been improved to provide better means of retrieving all the results of a query, extracting columns of parameters, and quickly plotting results.

* See http://sdss-marvin.readthedocs.io/en/latest/query.html for Query handling
* See http://sdss-marvin.readthedocs.io/en/latest/results.html for Results handling
* See http://sdss-marvin.readthedocs.io/en/latest/datamodel/query_dm.html for how to use the Query Datamodel
* See http://sdss-marvin.readthedocs.io/en/latest/utils/plot-scatter.html for quick scatter plotting
* See http://sdss-marvin.readthedocs.io/en/latest/utils/plot-hist.html for quick histogram plotting
###Code
from marvin.tools.query import Query
config.setRelease('MPL-4')

q = Query(searchfilter='nsa.z < 0.1', returnparams=['cube.ra', 'cube.dec', 'absmag_g_r', 'nsa.elpetro_ba'])
r = q.run()

# your results are now in Sets
r.results

# see the available columns
r.columns

# quickly plot the redshift vs g-r color
output = r.plot('nsa.z', 'absmag_g_r')

# or a histogram of the elpetro b/a axis ratio
output = r.hist('elpetro_ba')

# get all of the g-r colors as a list
gr = r.getListOf('absmag_g_r', return_all=True)
gr

# the results currently only have 100 out of some total
print(r.count, r.totalcount)

# let's extend our result set by the next chunk of 100
r.extendSet()
print(r.count, r.totalcount)
print(r.results)
###Output
200 1282
<ResultSet(set=1.0/7, index=0:200, count_in_set=200, total=1282)>
[ResultRow(mangaid='1-109394', plate=8082, plateifu='8082-9102', ifu_name='9102', ra=50.179936141, dec=-1.0022917898,
elpetro_absmag_g_r=1.26038932800293, elpetro_ba=0.42712, z=0.0361073), ResultRow(mangaid='1-113208', plate=8618, plateifu='8618-3701', ifu_name='3701', ra=317.504479435, dec=9.86822191739, elpetro_absmag_g_r=1.48788070678711, elpetro_ba=0.752286, z=0.0699044), ResultRow(mangaid='1-113219', plate=7815, plateifu='7815-9102', ifu_name='9102', ra=317.374745914, dec=10.0519434342, elpetro_absmag_g_r=0.543312072753906, elpetro_ba=0.517058, z=0.0408897), ResultRow(mangaid='1-113375', plate=7815, plateifu='7815-9101', ifu_name='9101', ra=316.639658795, dec=10.7512221884, elpetro_absmag_g_r=0.757579803466797, elpetro_ba=0.570455, z=0.028215), ResultRow(mangaid='1-113379', plate=7815, plateifu='7815-6101', ifu_name='6101', ra=316.541566803, dec=10.3454195236, elpetro_absmag_g_r=1.09770011901855, elpetro_ba=0.373641, z=0.0171611), ResultRow(mangaid='1-113403', plate=7815, plateifu='7815-12703', ifu_name='12703', ra=316.964281103, dec=11.2623177305, elpetro_absmag_g_r=0.745466232299805, elpetro_ba=0.823788, z=0.0715126), ResultRow(mangaid='1-113418', plate=7815, plateifu='7815-12704', ifu_name='12704', ra=319.353761201, dec=10.2316206875, elpetro_absmag_g_r=1.44098854064941, elpetro_ba=0.456991, z=0.0430806), ResultRow(mangaid='1-113469', plate=7815, plateifu='7815-12702', ifu_name='12702', ra=317.943526819, dec=9.27749462963, elpetro_absmag_g_r=0.847789764404297, elpetro_ba=0.522312, z=0.0394617), ResultRow(mangaid='1-113520', plate=7815, plateifu='7815-1901', ifu_name='1901', ra=317.502202242, dec=11.5106477077, elpetro_absmag_g_r=1.7510347366333, elpetro_ba=0.751988, z=0.0167652), ResultRow(mangaid='1-113525', plate=8618, plateifu='8618-6103', ifu_name='6103', ra=317.430068351, dec=11.3552406345, elpetro_absmag_g_r=1.57906627655029, elpetro_ba=0.78557, z=0.0169457), ResultRow(mangaid='1-113525', plate=7815, plateifu='7815-1902', ifu_name='1902', ra=317.430068351, dec=11.3552406345, elpetro_absmag_g_r=1.57906627655029, elpetro_ba=0.78557, z=0.0169457), 
ResultRow(mangaid='1-113539', plate=8618, plateifu='8618-12701', ifu_name='12701', ra=317.979595193, dec=11.3794496273, elpetro_absmag_g_r=1.26716613769531, elpetro_ba=0.31432, z=0.0177002), ResultRow(mangaid='1-113540', plate=7815, plateifu='7815-3702', ifu_name='3702', ra=317.903201533, dec=11.4969433994, elpetro_absmag_g_r=0.952407836914062, elpetro_ba=0.889156, z=0.0293823), ResultRow(mangaid='1-113567', plate=8618, plateifu='8618-1902', ifu_name='1902', ra=318.026426419, dec=11.3451572409, elpetro_absmag_g_r=1.41732978820801, elpetro_ba=0.515994, z=0.0167432), ResultRow(mangaid='1-113567', plate=7815, plateifu='7815-12701', ifu_name='12701', ra=318.026426419, dec=11.3451572409, elpetro_absmag_g_r=1.41732978820801, elpetro_ba=0.515994, z=0.0167432), ResultRow(mangaid='1-113585', plate=7815, plateifu='7815-3703', ifu_name='3703', ra=319.11342841, dec=10.7676202056, elpetro_absmag_g_r=1.68158912658691, elpetro_ba=0.773512, z=0.070276), ResultRow(mangaid='1-113587', plate=8618, plateifu='8618-12704', ifu_name='12704', ra=319.273361936, dec=11.1201347053, elpetro_absmag_g_r=1.02355575561523, elpetro_ba=0.858524, z=0.0704926), ResultRow(mangaid='1-113647', plate=8618, plateifu='8618-6104', ifu_name='6104', ra=319.814830226, dec=10.070628454, elpetro_absmag_g_r=1.78754997253418, elpetro_ba=0.850177, z=0.0738563), ResultRow(mangaid='1-113651', plate=7815, plateifu='7815-3704', ifu_name='3704', ra=319.233949063, dec=9.63757525774, elpetro_absmag_g_r=1.4986743927002, elpetro_ba=0.941069, z=0.0708847), ResultRow(mangaid='1-113654', plate=8618, plateifu='8618-9102', ifu_name='9102', ra=319.271463809, dec=9.9723035679, elpetro_absmag_g_r=1.10831832885742, elpetro_ba=0.451358, z=0.0430694), ResultRow(mangaid='1-113663', plate=8618, plateifu='8618-3703', ifu_name='3703', ra=318.804558778, dec=9.91312455151, elpetro_absmag_g_r=2.80322933197021, elpetro_ba=0.502782, z=0.0316328), ResultRow(mangaid='1-113672', plate=8618, plateifu='8618-3704', ifu_name='3704', ra=318.862286217, 
dec=9.75781705378, elpetro_absmag_g_r=1.25676536560059, elpetro_ba=0.984299, z=0.0702278), ResultRow(mangaid='1-113698', plate=8618, plateifu='8618-1901', ifu_name='1901', ra=319.194045241, dec=11.5400106533, elpetro_absmag_g_r=0.995195388793945, elpetro_ba=0.567433, z=0.0167445), ResultRow(mangaid='1-113700', plate=8618, plateifu='8618-12703', ifu_name='12703', ra=319.451824118, dec=11.6605961542, elpetro_absmag_g_r=0.61408805847168, elpetro_ba=0.751346, z=0.0378372), ResultRow(mangaid='1-113712', plate=7815, plateifu='7815-6104', ifu_name='6104', ra=319.193098655, dec=11.0437407875, elpetro_absmag_g_r=0.69244384765625, elpetro_ba=0.942534, z=0.0806967), ResultRow(mangaid='1-114073', plate=7975, plateifu='7975-12705', ifu_name='12705', ra=324.895915071, dec=11.2049630634, elpetro_absmag_g_r=0.751516342163086, elpetro_ba=0.775431, z=0.0402895), ResultRow(mangaid='1-114082', plate=7975, plateifu='7975-3701', ifu_name='3701', ra=324.152525127, dec=10.5067325085, elpetro_absmag_g_r=1.44381332397461, elpetro_ba=0.425806, z=0.0402683), ResultRow(mangaid='1-114121', plate=7975, plateifu='7975-12701', ifu_name='12701', ra=323.466394588, dec=10.0718531123, elpetro_absmag_g_r=1.43171119689941, elpetro_ba=0.520187, z=0.0879313), ResultRow(mangaid='1-114128', plate=7975, plateifu='7975-6101', ifu_name='6101', ra=323.470604621, dec=10.4397349551, elpetro_absmag_g_r=1.86342239379883, elpetro_ba=0.864153, z=0.077875), ResultRow(mangaid='1-114129', plate=7975, plateifu='7975-12702', ifu_name='12702', ra=323.521211519, dec=10.4218555682, elpetro_absmag_g_r=2.19032287597656, elpetro_ba=0.521832, z=0.0774097), ResultRow(mangaid='1-114145', plate=7975, plateifu='7975-6102', ifu_name='6102', ra=323.577092837, dec=11.2143239831, elpetro_absmag_g_r=1.41496467590332, elpetro_ba=0.655866, z=0.0341885), ResultRow(mangaid='1-114171', plate=7975, plateifu='7975-3702', ifu_name='3702', ra=323.296326308, dec=10.6442039273, elpetro_absmag_g_r=1.70641708374023, elpetro_ba=0.849777, z=0.0881405), 
ResultRow(mangaid='1-114303', plate=7975, plateifu='7975-1901', ifu_name='1901', ra=323.65768, dec=11.42181, elpetro_absmag_g_r=0.658689498901367, elpetro_ba=0.505907, z=0.0220107), ResultRow(mangaid='1-114306', plate=7975, plateifu='7975-9101', ifu_name='9101', ra=323.742750886, dec=11.296528361, elpetro_absmag_g_r=0.99525260925293, elpetro_ba=0.811891, z=0.0636505), ResultRow(mangaid='1-114325', plate=7975, plateifu='7975-12703', ifu_name='12703', ra=324.094963475, dec=12.2363038289, elpetro_absmag_g_r=1.34337997436523, elpetro_ba=0.244175, z=0.0288791), ResultRow(mangaid='1-114334', plate=7975, plateifu='7975-1902', ifu_name='1902', ra=324.259707865, dec=11.9062032693, elpetro_absmag_g_r=1.43183898925781, elpetro_ba=0.56156, z=0.0222473), ResultRow(mangaid='1-114454', plate=7975, plateifu='7975-12704', ifu_name='12704', ra=324.586417578, dec=11.3486728499, elpetro_absmag_g_r=1.29723358154297, elpetro_ba=0.591206, z=0.0888606), ResultRow(mangaid='1-114465', plate=7975, plateifu='7975-6104', ifu_name='6104', ra=324.89155826, dec=10.4834807378, elpetro_absmag_g_r=1.21394157409668, elpetro_ba=0.867381, z=0.0788547), ResultRow(mangaid='1-114500', plate=7975, plateifu='7975-9102', ifu_name='9102', ra=324.548678082, dec=12.1942577854, elpetro_absmag_g_r=1.14164924621582, elpetro_ba=0.355321, z=0.0220849), ResultRow(mangaid='1-114502', plate=7975, plateifu='7975-6103', ifu_name='6103', ra=324.799320383, dec=11.9393222318, elpetro_absmag_g_r=1.4673023223877, elpetro_ba=0.960909, z=0.0798058), ResultRow(mangaid='1-114532', plate=7975, plateifu='7975-3703', ifu_name='3703', ra=325.161350811, dec=11.7227434323, elpetro_absmag_g_r=1.73165702819824, elpetro_ba=0.920698, z=0.0902261), ResultRow(mangaid='1-114928', plate=7977, plateifu='7977-3702', ifu_name='3702', ra=331.080925269, dec=12.9683778244, elpetro_absmag_g_r=1.65719413757324, elpetro_ba=0.680598, z=0.0273478), ResultRow(mangaid='1-114955', plate=7977, plateifu='7977-12701', ifu_name='12701', ra=332.602089837, 
dec=11.7130772993, elpetro_absmag_g_r=1.01249313354492, elpetro_ba=0.742333, z=0.0922799), ResultRow(mangaid='1-114956', plate=7977, plateifu='7977-3704', ifu_name='3704', ra=332.798726703, dec=11.8007324019, elpetro_absmag_g_r=1.3456974029541, elpetro_ba=0.756417, z=0.0270248), ResultRow(mangaid='1-114980', plate=7977, plateifu='7977-9102', ifu_name='9102', ra=332.83066426, dec=12.1847175842, elpetro_absmag_g_r=1.14808464050293, elpetro_ba=0.656607, z=0.0630915), ResultRow(mangaid='1-114998', plate=7977, plateifu='7977-6102', ifu_name='6102', ra=332.756351306, dec=12.3743026872, elpetro_absmag_g_r=2.77035713195801, elpetro_ba=0.6304, z=0.0614042), ResultRow(mangaid='1-115062', plate=7977, plateifu='7977-1901', ifu_name='1901', ra=330.855372733, dec=12.6758983985, elpetro_absmag_g_r=1.65952682495117, elpetro_ba=0.865932, z=0.0260569), ResultRow(mangaid='1-115085', plate=7977, plateifu='7977-6103', ifu_name='6103', ra=331.802634213, dec=13.2660525434, elpetro_absmag_g_r=0.912630081176758, elpetro_ba=0.472784, z=0.0349304), ResultRow(mangaid='1-115097', plate=7977, plateifu='7977-3701', ifu_name='3701', ra=332.203447059, dec=13.3647373417, elpetro_absmag_g_r=1.49947357177734, elpetro_ba=0.528689, z=0.0274473), ResultRow(mangaid='1-115128', plate=7977, plateifu='7977-1902', ifu_name='1902', ra=332.481316937, dec=12.8180504327, elpetro_absmag_g_r=1.1044979095459, elpetro_ba=0.49669, z=0.0358116), ResultRow(mangaid='1-115162', plate=7977, plateifu='7977-12703', ifu_name='12703', ra=333.201842347, dec=13.334120927, elpetro_absmag_g_r=1.13131713867188, elpetro_ba=0.479943, z=0.0738627), ResultRow(mangaid='1-115320', plate=7977, plateifu='7977-3703', ifu_name='3703', ra=333.052045245, dec=12.205190661, elpetro_absmag_g_r=0.99519157409668, elpetro_ba=0.842721, z=0.0275274), ResultRow(mangaid='1-124604', plate=8439, plateifu='8439-6103', ifu_name='6103', ra=141.34417921, dec=50.5536812778, elpetro_absmag_g_r=1.38611221313477, elpetro_ba=0.345553, z=0.0253001), 
ResultRow(mangaid='1-133922', plate=8486, plateifu='8486-6104', ifu_name='6104', ra=239.195689664, dec=47.9955208307, elpetro_absmag_g_r=1.51949119567871, elpetro_ba=0.390132, z=0.0174718), ResultRow(mangaid='1-133941', plate=8486, plateifu='8486-9102', ifu_name='9102', ra=239.030589848, dec=48.0308761201, elpetro_absmag_g_r=1.04214859008789, elpetro_ba=0.740501, z=0.0189045), ResultRow(mangaid='1-133945', plate=8486, plateifu='8486-3703', ifu_name='3703', ra=238.881357667, dec=47.677310104, elpetro_absmag_g_r=1.70501899719238, elpetro_ba=0.75216, z=0.0183248), ResultRow(mangaid='1-133948', plate=8486, plateifu='8486-6103', ifu_name='6103', ra=238.891298957, dec=48.0223923799, elpetro_absmag_g_r=1.62374401092529, elpetro_ba=0.662078, z=0.0195194), ResultRow(mangaid='1-133976', plate=8486, plateifu='8486-9101', ifu_name='9101', ra=238.718472619, dec=47.8808922742, elpetro_absmag_g_r=1.26091766357422, elpetro_ba=0.627185, z=0.0182938), ResultRow(mangaid='1-133987', plate=8486, plateifu='8486-1902', ifu_name='1902', ra=239.334163047, dec=48.2072621316, elpetro_absmag_g_r=1.73217391967773, elpetro_ba=0.902851, z=0.0195435), ResultRow(mangaid='1-134004', plate=8486, plateifu='8486-1901', ifu_name='1901', ra=238.448582292, dec=47.4049584412, elpetro_absmag_g_r=1.27153015136719, elpetro_ba=0.667273, z=0.0185601), ResultRow(mangaid='1-134020', plate=8486, plateifu='8486-6102', ifu_name='6102', ra=238.046893627, dec=48.0439162921, elpetro_absmag_g_r=1.4318904876709, elpetro_ba=0.452976, z=0.0193267), ResultRow(mangaid='1-134209', plate=8549, plateifu='8549-9101', ifu_name='9101', ra=242.276471895, dec=46.6712048189, elpetro_absmag_g_r=1.46211814880371, elpetro_ba=0.938842, z=0.0545042), ResultRow(mangaid='1-134239', plate=8549, plateifu='8549-3703', ifu_name='3703', ra=241.416442386, dec=46.8465606897, elpetro_absmag_g_r=1.20720481872559, elpetro_ba=0.840219, z=0.0571086), ResultRow(mangaid='1-134248', plate=8549, plateifu='8549-3702', ifu_name='3702', ra=241.005278975, 
dec=46.8029102028, elpetro_absmag_g_r=1.04830741882324, elpetro_ba=0.603141, z=0.0212204), ResultRow(mangaid='1-134293', plate=8549, plateifu='8549-6103', ifu_name='6103', ra=240.418740846, dec=46.085291751, elpetro_absmag_g_r=0.724908828735352, elpetro_ba=0.685683, z=0.0416784), ResultRow(mangaid='1-134503', plate=8555, plateifu='8555-1901', ifu_name='1901', ra=243.873718478, dec=44.2912632693, elpetro_absmag_g_r=1.38505744934082, elpetro_ba=0.580866, z=0.0371472), ResultRow(mangaid='1-134562', plate=8549, plateifu='8549-1902', ifu_name='1902', ra=242.727439731, dec=44.985695801, elpetro_absmag_g_r=0.999540328979492, elpetro_ba=0.709542, z=0.0355137), ResultRow(mangaid='1-134597', plate=8549, plateifu='8549-12705', ifu_name='12705', ra=241.907223711, dec=45.0653702307, elpetro_absmag_g_r=1.32281875610352, elpetro_ba=0.493211, z=0.0441938), ResultRow(mangaid='1-134599', plate=8549, plateifu='8549-12704', ifu_name='12704', ra=242.978644743, dec=46.1277269855, elpetro_absmag_g_r=1.2156925201416, elpetro_ba=0.347987, z=0.019658), ResultRow(mangaid='1-134614', plate=8549, plateifu='8549-6102', ifu_name='6102', ra=243.009178672, dec=45.7750314981, elpetro_absmag_g_r=1.25503730773926, elpetro_ba=0.409631, z=0.0528277), ResultRow(mangaid='1-134634', plate=8549, plateifu='8549-3704', ifu_name='3704', ra=243.18537291, dec=45.3520102657, elpetro_absmag_g_r=1.71317291259766, elpetro_ba=0.601301, z=0.0523251), ResultRow(mangaid='1-134848', plate=8555, plateifu='8555-12703', ifu_name='12703', ra=244.331994382, dec=43.4796723691, elpetro_absmag_g_r=1.4580078125, elpetro_ba=0.276868, z=0.0584495), ResultRow(mangaid='1-134924', plate=8555, plateifu='8555-9101', ifu_name='9101', ra=245.662015493, dec=43.4646577078, elpetro_absmag_g_r=1.76020240783691, elpetro_ba=0.819258, z=0.0319997), ResultRow(mangaid='1-134954', plate=8555, plateifu='8555-12705', ifu_name='12705', ra=246.578190983, dec=43.4074643202, elpetro_absmag_g_r=1.38137054443359, elpetro_ba=0.692219, z=0.0315232), 
ResultRow(mangaid='1-134964', plate=8555, plateifu='8555-3701', ifu_name='3701', ra=246.760690284, dec=43.4760996734, elpetro_absmag_g_r=1.5971508026123, elpetro_ba=0.853938, z=0.0462348), ResultRow(mangaid='1-135030', plate=8603, plateifu='8603-12704', ifu_name='12704', ra=247.893876589, dec=40.5655973228, elpetro_absmag_g_r=1.31695175170898, elpetro_ba=0.700621, z=0.0273289), ResultRow(mangaid='1-135054', plate=8550, plateifu='8550-12703', ifu_name='12703', ra=247.674430234, dec=40.5293893805, elpetro_absmag_g_r=1.34156799316406, elpetro_ba=0.853565, z=0.0298122), ResultRow(mangaid='1-135055', plate=8601, plateifu='8601-6104', ifu_name='6104', ra=247.641287575, dec=40.5394009252, elpetro_absmag_g_r=1.68307113647461, elpetro_ba=0.808577, z=0.0300581), ResultRow(mangaid='1-135057', plate=8601, plateifu='8601-12703', ifu_name='12703', ra=247.57407, dec=40.59861, elpetro_absmag_g_r=0.928314208984375, elpetro_ba=0.834526, z=0.0288518), ResultRow(mangaid='1-135058', plate=8603, plateifu='8603-6103', ifu_name='6103', ra=247.800367796, dec=40.4218744432, elpetro_absmag_g_r=1.1861629486084, elpetro_ba=0.392703, z=0.0270087), ResultRow(mangaid='1-135077', plate=8312, plateifu='8312-6104', ifu_name='6104', ra=247.638466864, dec=41.4385861863, elpetro_absmag_g_r=1.33458137512207, elpetro_ba=0.458094, z=0.0290664), ResultRow(mangaid='1-135095', plate=8312, plateifu='8312-3702', ifu_name='3702', ra=247.245291144, dec=41.255253243, elpetro_absmag_g_r=1.44723129272461, elpetro_ba=0.658268, z=0.0332324), ResultRow(mangaid='1-135129', plate=8603, plateifu='8603-12705', ifu_name='12705', ra=247.280269588, dec=40.5910287121, elpetro_absmag_g_r=1.81981086730957, elpetro_ba=0.503666, z=0.0327969), ResultRow(mangaid='1-135133', plate=8603, plateifu='8603-12703', ifu_name='12703', ra=247.282646413, dec=40.6650474998, elpetro_absmag_g_r=1.36585807800293, elpetro_ba=0.627429, z=0.0299683), ResultRow(mangaid='1-135134', plate=8603, plateifu='8603-9101', ifu_name='9101', ra=247.225624269, 
dec=40.8666111706, elpetro_absmag_g_r=1.85215187072754, elpetro_ba=0.958519, z=0.030343), ResultRow(mangaid='1-135152', plate=8312, plateifu='8312-6103', ifu_name='6103', ra=246.887611078, dec=41.1385055016, elpetro_absmag_g_r=0.762582778930664, elpetro_ba=0.839506, z=0.0301811), ResultRow(mangaid='1-135157', plate=8603, plateifu='8603-3702', ifu_name='3702', ra=247.04131843, dec=40.6956030265, elpetro_absmag_g_r=1.68464851379395, elpetro_ba=0.518096, z=0.0323713), ResultRow(mangaid='1-135207', plate=8555, plateifu='8555-1902', ifu_name='1902', ra=246.323470587, dec=42.6942265737, elpetro_absmag_g_r=1.51096343994141, elpetro_ba=0.755948, z=0.031485), ResultRow(mangaid='1-135371', plate=8588, plateifu='8588-9101', ifu_name='9101', ra=250.156240419, dec=39.2216349362, elpetro_absmag_g_r=1.37564086914062, elpetro_ba=0.430169, z=0.0352359), ResultRow(mangaid='1-135372', plate=8588, plateifu='8588-6102', ifu_name='6102', ra=250.116709759, dec=39.3201174959, elpetro_absmag_g_r=1.68138885498047, elpetro_ba=0.789335, z=0.0300793), ResultRow(mangaid='1-135383', plate=8588, plateifu='8588-12705', ifu_name='12705', ra=250.312873125, dec=39.7523514003, elpetro_absmag_g_r=1.2461109161377, elpetro_ba=0.355884, z=0.0301398), ResultRow(mangaid='1-135468', plate=8550, plateifu='8550-12705', ifu_name='12705', ra=249.135695215, dec=39.0278800132, elpetro_absmag_g_r=1.37894058227539, elpetro_ba=0.670573, z=0.029986), ResultRow(mangaid='1-135502', plate=8604, plateifu='8604-12703', ifu_name='12703', ra=247.76417484, dec=39.838503868, elpetro_absmag_g_r=1.57090950012207, elpetro_ba=0.804992, z=0.0305383), ResultRow(mangaid='1-135503', plate=8604, plateifu='8604-3703', ifu_name='3703', ra=247.882111795, dec=39.8976507098, elpetro_absmag_g_r=1.6621150970459, elpetro_ba=0.914384, z=0.0296457), ResultRow(mangaid='1-135506', plate=8601, plateifu='8601-3704', ifu_name='3704', ra=247.948553785, dec=39.8142396526, elpetro_absmag_g_r=1.70755767822266, elpetro_ba=0.740217, z=0.0295479), 
ResultRow(mangaid='1-135512', plate=8601, plateifu='8601-6102', ifu_name='6102', ra=247.711831631, dec=40.0247994472, elpetro_absmag_g_r=0.778741836547852, elpetro_ba=0.783227, z=0.0279629), ResultRow(mangaid='1-135516', plate=8550, plateifu='8550-6104', ifu_name='6104', ra=248.41315, dec=39.25763, elpetro_absmag_g_r=1.33112716674805, elpetro_ba=0.41841, z=0.0314747), ResultRow(mangaid='1-135517', plate=8588, plateifu='8588-6101', ifu_name='6101', ra=248.456755755, dec=39.2632054313, elpetro_absmag_g_r=1.17428970336914, elpetro_ba=0.961436, z=0.0317611), ResultRow(mangaid='1-135530', plate=8550, plateifu='8550-9101', ifu_name='9101', ra=247.409672103, dec=40.2353879985, elpetro_absmag_g_r=1.7724609375, elpetro_ba=0.286038, z=0.0283296), ResultRow(mangaid='1-135545', plate=8601, plateifu='8601-6103', ifu_name='6103', ra=247.530374396, dec=40.8801572026, elpetro_absmag_g_r=1.43307685852051, elpetro_ba=0.402053, z=0.0301334), ResultRow(mangaid='1-135548', plate=8601, plateifu='8601-12702', ifu_name='12702', ra=247.591672626, dec=40.9242421985, elpetro_absmag_g_r=1.05030250549316, elpetro_ba=0.948442, z=0.030559), ResultRow(mangaid='1-135568', plate=8601, plateifu='8601-12701', ifu_name='12701', ra=247.718035556, dec=41.2861515449, elpetro_absmag_g_r=0.790615081787109, elpetro_ba=0.6425, z=0.0938565), ResultRow(mangaid='1-135641', plate=8588, plateifu='8588-12704', ifu_name='12704', ra=249.557305714, dec=40.1468209363, elpetro_absmag_g_r=1.44169998168945, elpetro_ba=0.377239, z=0.030363), ResultRow(mangaid='1-135657', plate=8588, plateifu='8588-1901', ifu_name='1901', ra=249.717085826, dec=40.1993481631, elpetro_absmag_g_r=1.22106170654297, elpetro_ba=0.772008, z=0.0364618), ResultRow(mangaid='1-135679', plate=8588, plateifu='8588-6103', ifu_name='6103', ra=250.349059361, dec=40.2187885261, elpetro_absmag_g_r=1.4596061706543, elpetro_ba=0.57416, z=0.0331057), ResultRow(mangaid='1-135794', plate=8588, plateifu='8588-1902', ifu_name='1902', ra=249.770169345, 
dec=39.2907848202, elpetro_absmag_g_r=1.6043529510498, elpetro_ba=0.617959, z=0.0304343), ResultRow(mangaid='1-135810', plate=8601, plateifu='8601-12705', ifu_name='12705', ra=250.12314401, dec=39.2351144868, elpetro_absmag_g_r=1.43718338012695, elpetro_ba=0.451484, z=0.0297241), ResultRow(mangaid='1-136120', plate=8606, plateifu='8606-3701', ifu_name='3701', ra=254.997419646, dec=36.0290774727, elpetro_absmag_g_r=1.36807250976562, elpetro_ba=0.780117, z=0.0573351), ResultRow(mangaid='1-136248', plate=8606, plateifu='8606-3702', ifu_name='3702', ra=253.793913226, dec=36.9063091542, elpetro_absmag_g_r=1.42204856872559, elpetro_ba=0.50548, z=0.0235624), ResultRow(mangaid='1-136268', plate=8606, plateifu='8606-6101', ifu_name='6101', ra=254.44755809, dec=37.6877060265, elpetro_absmag_g_r=1.20418357849121, elpetro_ba=0.498686, z=0.0416946), ResultRow(mangaid='1-136286', plate=8606, plateifu='8606-9102', ifu_name='9102', ra=255.709053426, dec=36.7067487022, elpetro_absmag_g_r=0.959020614624023, elpetro_ba=0.425402, z=0.0327918), ResultRow(mangaid='1-136304', plate=8606, plateifu='8606-1902', ifu_name='1902', ra=256.01730405, dec=36.4373676031, elpetro_absmag_g_r=1.11434555053711, elpetro_ba=0.488437, z=0.0236332), ResultRow(mangaid='1-136305', plate=8606, plateifu='8606-3704', ifu_name='3704', ra=255.915542507, dec=36.3849337159, elpetro_absmag_g_r=1.20375823974609, elpetro_ba=0.379571, z=0.0246675), ResultRow(mangaid='1-136306', plate=8606, plateifu='8606-12702', ifu_name='12702', ra=255.869931612, dec=36.4366645326, elpetro_absmag_g_r=1.50382232666016, elpetro_ba=0.873923, z=0.0231691), ResultRow(mangaid='1-137528', plate=8440, plateifu='8440-6103', ifu_name='6103', ra=134.40495469, dec=41.0439158135, elpetro_absmag_g_r=1.47062683105469, elpetro_ba=0.814622, z=0.0874946), ResultRow(mangaid='1-137714', plate=8247, plateifu='8247-3704', ifu_name='3704', ra=136.039205522, dec=42.3034211072, elpetro_absmag_g_r=1.52022552490234, elpetro_ba=0.529873, z=0.0265976), 
ResultRow(mangaid='1-137730', plate=8247, plateifu='8247-9101', ifu_name='9101', ra=136.778259104, dec=42.5951034895, elpetro_absmag_g_r=0.883840560913086, elpetro_ba=0.883013, z=0.0415657), ResultRow(mangaid='1-137795', plate=8247, plateifu='8247-12702', ifu_name='12702', ra=135.722564417, dec=43.2477264356, elpetro_absmag_g_r=0.863546371459961, elpetro_ba=0.696187, z=0.0436196), ResultRow(mangaid='1-137797', plate=8247, plateifu='8247-12703', ifu_name='12703', ra=136.363181204, dec=44.1438800822, elpetro_absmag_g_r=0.999143600463867, elpetro_ba=0.640129, z=0.0533346), ResultRow(mangaid='1-137799', plate=8247, plateifu='8247-3703', ifu_name='3703', ra=136.842484254, dec=43.275431327, elpetro_absmag_g_r=1.47824478149414, elpetro_ba=0.873061, z=0.0415027), ResultRow(mangaid='1-137801', plate=8249, plateifu='8249-3701', ifu_name='3701', ra=136.68645847, dec=44.2609809065, elpetro_absmag_g_r=1.58131790161133, elpetro_ba=0.89272, z=0.0490247), ResultRow(mangaid='1-137801', plate=8247, plateifu='8247-3702', ifu_name='3702', ra=136.68645847, dec=44.2609809065, elpetro_absmag_g_r=1.58131790161133, elpetro_ba=0.89272, z=0.0490247), ResultRow(mangaid='1-137844', plate=8250, plateifu='8250-9102', ifu_name='9102', ra=139.427012288, dec=44.1006868066, elpetro_absmag_g_r=1.75845336914062, elpetro_ba=0.749366, z=0.0323374), ResultRow(mangaid='1-137845', plate=8250, plateifu='8250-9101', ifu_name='9101', ra=139.308858804, dec=44.4891619278, elpetro_absmag_g_r=1.76508140563965, elpetro_ba=0.744394, z=0.0320271), ResultRow(mangaid='1-137845', plate=8249, plateifu='8249-6104', ifu_name='6104', ra=139.308858804, dec=44.4891619278, elpetro_absmag_g_r=1.76508140563965, elpetro_ba=0.744394, z=0.0320271), ResultRow(mangaid='1-137853', plate=8250, plateifu='8250-3702', ifu_name='3702', ra=138.935541667, dec=44.2360887374, elpetro_absmag_g_r=1.66989135742188, elpetro_ba=0.86344, z=0.0321364), ResultRow(mangaid='1-137853', plate=8249, plateifu='8249-12705', ifu_name='12705', 
ra=138.935541667, dec=44.2360887374, elpetro_absmag_g_r=1.66989135742188, elpetro_ba=0.86344, z=0.0321364), ResultRow(mangaid='1-137870', plate=8247, plateifu='8247-12704', ifu_name='12704', ra=136.730098431, dec=44.121516356, elpetro_absmag_g_r=0.911367416381836, elpetro_ba=0.883403, z=0.0494434), ResultRow(mangaid='1-137875', plate=8249, plateifu='8249-6102', ifu_name='6102', ra=137.335924379, dec=45.0655135856, elpetro_absmag_g_r=0.834562301635742, elpetro_ba=0.943022, z=0.0510126), ResultRow(mangaid='1-137883', plate=8249, plateifu='8249-3704', ifu_name='3704', ra=137.874763008, dec=45.4683204593, elpetro_absmag_g_r=1.44910621643066, elpetro_ba=0.802596, z=0.0268253), ResultRow(mangaid='1-137890', plate=8249, plateifu='8249-1901', ifu_name='1901', ra=137.219338724, dec=44.9322670576, elpetro_absmag_g_r=1.19685363769531, elpetro_ba=0.549448, z=0.0265684), ResultRow(mangaid='1-137898', plate=8249, plateifu='8249-12702', ifu_name='12702', ra=137.562412054, dec=44.6841342226, elpetro_absmag_g_r=0.960855484008789, elpetro_ba=0.379162, z=0.0346482), ResultRow(mangaid='1-137908', plate=8249, plateifu='8249-12703', ifu_name='12703', ra=139.55919103, dec=45.6516888989, elpetro_absmag_g_r=1.1392650604248, elpetro_ba=0.700622, z=0.0269041), ResultRow(mangaid='1-137912', plate=8250, plateifu='8250-12703', ifu_name='12703', ra=139.647743513, dec=44.5967370112, elpetro_absmag_g_r=0.227899551391602, elpetro_ba=0.779483, z=0.014213), ResultRow(mangaid='1-137915', plate=8249, plateifu='8249-1902', ifu_name='1902', ra=139.797122285, dec=45.3665231283, elpetro_absmag_g_r=1.49036026000977, elpetro_ba=0.961667, z=0.031543), ResultRow(mangaid='1-137961', plate=8249, plateifu='8249-3703', ifu_name='3703', ra=139.720468628, dec=45.7277823533, elpetro_absmag_g_r=1.23330879211426, elpetro_ba=0.895169, z=0.026438), ResultRow(mangaid='1-138021', plate=8252, plateifu='8252-12705', ifu_name='12705', ra=145.443221426, dec=46.9738383647, elpetro_absmag_g_r=1.2242603302002, 
elpetro_ba=0.853881, z=0.0255975), ResultRow(mangaid='1-138034', plate=8252, plateifu='8252-3701', ifu_name='3701', ra=144.846118089, dec=47.1268642387, elpetro_absmag_g_r=1.54246711730957, elpetro_ba=0.52102, z=0.027267), ResultRow(mangaid='1-138087', plate=8252, plateifu='8252-12701', ifu_name='12701', ra=144.23925577, dec=48.2941162265, elpetro_absmag_g_r=0.339872360229492, elpetro_ba=0.643171, z=0.0249804), ResultRow(mangaid='1-138102', plate=8252, plateifu='8252-6102', ifu_name='6102', ra=144.557956402, dec=48.3883017672, elpetro_absmag_g_r=1.02695465087891, elpetro_ba=0.739386, z=0.0257882), ResultRow(mangaid='1-138105', plate=8252, plateifu='8252-6101', ifu_name='6101', ra=144.617048762, dec=48.5255082955, elpetro_absmag_g_r=1.2266845703125, elpetro_ba=0.738417, z=0.0248735), ResultRow(mangaid='1-138106', plate=8252, plateifu='8252-3703', ifu_name='3703', ra=144.352308981, dec=48.5154530802, elpetro_absmag_g_r=1.10024833679199, elpetro_ba=0.806873, z=0.0243491), ResultRow(mangaid='1-138140', plate=8252, plateifu='8252-3704', ifu_name='3704', ra=145.308121958, dec=47.6885981864, elpetro_absmag_g_r=1.4588623046875, elpetro_ba=0.869377, z=0.0467992), ResultRow(mangaid='1-138157', plate=8252, plateifu='8252-9102', ifu_name='9102', ra=145.541530882, dec=48.0128634742, elpetro_absmag_g_r=0.744720458984375, elpetro_ba=0.630656, z=0.0561577), ResultRow(mangaid='1-138164', plate=8252, plateifu='8252-1902', ifu_name='1902', ra=146.091838441, dec=47.459850984, elpetro_absmag_g_r=1.3321475982666, elpetro_ba=0.917753, z=0.0258991), ResultRow(mangaid='1-147394', plate=8250, plateifu='8250-12705', ifu_name='12705', ra=140.39879069, dec=43.2572462761, elpetro_absmag_g_r=0.938104629516602, elpetro_ba=0.255031, z=0.0160493), ResultRow(mangaid='1-147475', plate=8453, plateifu='8453-12704', ifu_name='12704', ra=153.13479279, dec=46.6953613957, elpetro_absmag_g_r=0.833671569824219, elpetro_ba=0.885371, z=0.0381522), ResultRow(mangaid='1-147488', plate=8453, plateifu='8453-1902', 
ifu_name='1902', ra=153.21425096, dec=46.9128221111, elpetro_absmag_g_r=1.23658561706543, elpetro_ba=0.452716, z=0.0241526), ResultRow(mangaid='1-147496', plate=8453, plateifu='8453-6102', ifu_name='6102', ra=153.213639346, dec=47.2949237539, elpetro_absmag_g_r=0.782651901245117, elpetro_ba=0.763455, z=0.0395361), ResultRow(mangaid='1-147507', plate=8453, plateifu='8453-6101', ifu_name='6101', ra=152.773273523, dec=46.8995324281, elpetro_absmag_g_r=1.36958885192871, elpetro_ba=0.591953, z=0.0250793), ResultRow(mangaid='1-147514', plate=8453, plateifu='8453-12701', ifu_name='12701', ra=151.309949901, dec=46.6508890341, elpetro_absmag_g_r=1.22234153747559, elpetro_ba=0.814928, z=0.0251003), ResultRow(mangaid='1-147521', plate=8453, plateifu='8453-3702', ifu_name='3702', ra=152.545357653, dec=46.9522671141, elpetro_absmag_g_r=1.49322319030762, elpetro_ba=0.459684, z=0.0253024), ResultRow(mangaid='1-147522', plate=8453, plateifu='8453-9102', ifu_name='9102', ra=152.514716814, dec=47.1209306545, elpetro_absmag_g_r=1.25778961181641, elpetro_ba=0.923625, z=0.0653628), ResultRow(mangaid='1-147537', plate=8453, plateifu='8453-12702', ifu_name='12702', ra=151.547771122, dec=47.2950386608, elpetro_absmag_g_r=1.23126220703125, elpetro_ba=0.554456, z=0.0381068), ResultRow(mangaid='1-147602', plate=8453, plateifu='8453-6103', ifu_name='6103', ra=151.729558675, dec=47.9841111295, elpetro_absmag_g_r=1.75717353820801, elpetro_ba=0.932527, z=0.067855), ResultRow(mangaid='1-147649', plate=8453, plateifu='8453-9101', ifu_name='9101', ra=152.046182936, dec=47.5174726058, elpetro_absmag_g_r=1.00096130371094, elpetro_ba=0.351228, z=0.0384484), ResultRow(mangaid='1-147685', plate=8452, plateifu='8452-12702', ifu_name='12702', ra=156.044276918, dec=47.5239549356, elpetro_absmag_g_r=1.55574607849121, elpetro_ba=0.233187, z=0.0425735), ResultRow(mangaid='1-147787', plate=8453, plateifu='8453-6104', ifu_name='6104', ra=154.119427243, dec=47.3648162968, elpetro_absmag_g_r=1.19962120056152, 
elpetro_ba=0.395606, z=0.0403757), ResultRow(mangaid='1-147815', plate=8453, plateifu='8453-1901', ifu_name='1901', ra=153.365546207, dec=47.516235898, elpetro_absmag_g_r=1.38706016540527, elpetro_ba=0.598532, z=0.0253396), ResultRow(mangaid='1-147863', plate=8453, plateifu='8453-12703', ifu_name='12703', ra=153.685061429, dec=48.689638952, elpetro_absmag_g_r=1.26955413818359, elpetro_ba=0.527282, z=0.0632026), ResultRow(mangaid='1-148046', plate=8452, plateifu='8452-1902', ifu_name='1902', ra=157.77930272, dec=48.0148303874, elpetro_absmag_g_r=0.790740966796875, elpetro_ba=0.925427, z=0.058703), ResultRow(mangaid='1-148068', plate=8452, plateifu='8452-12703', ifu_name='12703', ra=156.805684986, dec=48.2447914261, elpetro_absmag_g_r=1.28773880004883, elpetro_ba=0.805928, z=0.0609631), ResultRow(mangaid='1-148127', plate=8452, plateifu='8452-3702', ifu_name='3702', ra=156.298016415, dec=47.7390794143, elpetro_absmag_g_r=1.83721733093262, elpetro_ba=0.850507, z=0.0621072), ResultRow(mangaid='1-155337', plate=8249, plateifu='8249-12701', ifu_name='12701', ra=136.156282887, dec=44.874731539, elpetro_absmag_g_r=1.0489559173584, elpetro_ba=0.426153, z=0.0345388), ResultRow(mangaid='1-155440', plate=8249, plateifu='8249-9101', ifu_name='9101', ra=136.476492743, dec=46.259107066, elpetro_absmag_g_r=1.50351715087891, elpetro_ba=0.678623, z=0.0518655), ResultRow(mangaid='1-155456', plate=8249, plateifu='8249-6103', ifu_name='6103', ra=136.793850517, dec=46.2111457117, elpetro_absmag_g_r=1.64328575134277, elpetro_ba=0.962315, z=0.040334), ResultRow(mangaid='1-155463', plate=8249, plateifu='8249-6101', ifu_name='6101', ra=137.562456488, dec=46.2932696556, elpetro_absmag_g_r=1.21530723571777, elpetro_ba=0.558139, z=0.026734), ResultRow(mangaid='1-155541', plate=8249, plateifu='8249-9102', ifu_name='9102', ra=138.37190266, dec=46.6142215927, elpetro_absmag_g_r=1.68683815002441, elpetro_ba=0.544517, z=0.0802487), ResultRow(mangaid='1-155558', plate=8249, plateifu='8249-3702', 
ifu_name='3702', ra=137.03265263, dec=45.9209619515, elpetro_absmag_g_r=1.2888126373291, elpetro_ba=0.589713, z=0.0267975), ResultRow(mangaid='1-155903', plate=8439, plateifu='8439-1901', ifu_name='1901', ra=141.190236455, dec=49.4448016737, elpetro_absmag_g_r=1.11646842956543, elpetro_ba=0.969302, z=0.0163661), ResultRow(mangaid='1-155926', plate=8439, plateifu='8439-12702', ifu_name='12702', ra=141.539307103, dec=49.3102016203, elpetro_absmag_g_r=1.5238151550293, elpetro_ba=0.796842, z=0.0269288), ResultRow(mangaid='1-155975', plate=8439, plateifu='8439-6102', ifu_name='6102', ra=142.778167545, dec=49.0797456578, elpetro_absmag_g_r=1.39241409301758, elpetro_ba=0.725726, z=0.0339319), ResultRow(mangaid='1-155978', plate=8439, plateifu='8439-12701', ifu_name='12701', ra=143.010196099, dec=48.551093077, elpetro_absmag_g_r=0.783824920654297, elpetro_ba=0.526699, z=0.0162666), ResultRow(mangaid='1-156011', plate=8252, plateifu='8252-3702', ifu_name='3702', ra=144.059863049, dec=48.7456976861, elpetro_absmag_g_r=1.76472663879395, elpetro_ba=0.842447, z=0.0905527), ResultRow(mangaid='1-156037', plate=8439, plateifu='8439-9102', ifu_name='9102', ra=143.754018642, dec=48.9767418599, elpetro_absmag_g_r=0.751018524169922, elpetro_ba=0.550243, z=0.0249582), ResultRow(mangaid='1-156061', plate=8439, plateifu='8439-1902', ifu_name='1902', ra=143.697034579, dec=48.7475756651, elpetro_absmag_g_r=1.58340835571289, elpetro_ba=0.859392, z=0.0259393), ResultRow(mangaid='1-156062', plate=8439, plateifu='8439-12705', ifu_name='12705', ra=143.288053477, dec=49.0503236816, elpetro_absmag_g_r=1.47800445556641, elpetro_ba=0.844666, z=0.0511487), ResultRow(mangaid='1-156074', plate=8439, plateifu='8439-6101', ifu_name='6101', ra=143.184618775, dec=48.7963482386, elpetro_absmag_g_r=0.928119659423828, elpetro_ba=0.571587, z=0.0263866), ResultRow(mangaid='1-156137', plate=8439, plateifu='8439-12704', ifu_name='12704', ra=144.031088241, dec=50.4392201284, elpetro_absmag_g_r=1.18128204345703, 
elpetro_ba=0.444969, z=0.0640375), ResultRow(mangaid='1-156154', plate=8439, plateifu='8439-9101', ifu_name='9101', ra=142.713904348, dec=50.3188614584, elpetro_absmag_g_r=0.718753814697266, elpetro_ba=0.439643, z=0.0379614), ResultRow(mangaid='1-166736', plate=8459, plateifu='8459-12702', ifu_name='12702', ra=147.585854164, dec=43.1455699673, elpetro_absmag_g_r=1.00078010559082, elpetro_ba=0.826408, z=0.0170809), ResultRow(mangaid='1-166738', plate=8459, plateifu='8459-12705', ifu_name='12705', ra=148.117076795, dec=42.8191413496, elpetro_absmag_g_r=0.993005752563477, elpetro_ba=0.917477, z=0.016087), ResultRow(mangaid='1-166739', plate=8459, plateifu='8459-12701', ifu_name='12701', ra=147.37898128, dec=42.1302903462, elpetro_absmag_g_r=1.65610313415527, elpetro_ba=0.772124, z=0.0718279), ResultRow(mangaid='1-166754', plate=8459, plateifu='8459-3704', ifu_name='3704', ra=147.32578151, dec=43.3517193284, elpetro_absmag_g_r=0.900096893310547, elpetro_ba=0.417579, z=0.0164167), ResultRow(mangaid='1-166889', plate=8459, plateifu='8459-9101', ifu_name='9101', ra=147.277688884, dec=44.0486811007, elpetro_absmag_g_r=0.859102249145508, elpetro_ba=0.525696, z=0.0156854), ResultRow(mangaid='1-166919', plate=8459, plateifu='8459-3702', ifu_name='3702', ra=146.709100143, dec=43.4238429596, elpetro_absmag_g_r=1.31706047058105, elpetro_ba=0.866956, z=0.0722105), ResultRow(mangaid='1-166930', plate=8459, plateifu='8459-6103', ifu_name='6103', ra=146.789027825, dec=43.4185743942, elpetro_absmag_g_r=1.5174388885498, elpetro_ba=0.550614, z=0.0720255), ResultRow(mangaid='1-166932', plate=8459, plateifu='8459-3701', ifu_name='3701', ra=146.785813609, dec=43.5104758987, elpetro_absmag_g_r=1.91670417785645, elpetro_ba=0.951709, z=0.0724488), ResultRow(mangaid='1-166947', plate=8459, plateifu='8459-3703', ifu_name='3703', ra=147.335, dec=43.44299, elpetro_absmag_g_r=1.58527755737305, elpetro_ba=0.921915, z=0.0719792), ResultRow(mangaid='1-166969', plate=8459, plateifu='8459-6102', 
ifu_name='6102', ra=147.990674372, dec=43.4140430617, elpetro_absmag_g_r=0.653806686401367, elpetro_ba=0.921856, z=0.0158773), ResultRow(mangaid='1-167013', plate=8459, plateifu='8459-9102', ifu_name='9102', ra=149.888880629, dec=43.6605000576, elpetro_absmag_g_r=0.906030654907227, elpetro_ba=0.779596, z=0.0170491), ResultRow(mangaid='1-167044', plate=8459, plateifu='8459-6104', ifu_name='6104', ra=149.346878642, dec=44.1547632349, elpetro_absmag_g_r=1.66348838806152, elpetro_ba=0.938967, z=0.0741969), ResultRow(mangaid='1-167067', plate=8459, plateifu='8459-1902', ifu_name='1902', ra=148.502535855, dec=43.0448001127, elpetro_absmag_g_r=1.06909561157227, elpetro_ba=0.766896, z=0.0169612), ResultRow(mangaid='1-167075', plate=8459, plateifu='8459-12704', ifu_name='12704', ra=147.604836276, dec=44.0406378719, elpetro_absmag_g_r=0.854578018188477, elpetro_ba=0.62063, z=0.0158584), ResultRow(mangaid='1-167079', plate=8459, plateifu='8459-1901', ifu_name='1901', ra=147.801793989, dec=44.0093089046, elpetro_absmag_g_r=1.34856986999512, elpetro_ba=0.777813, z=0.015711), ResultRow(mangaid='1-167080', plate=8459, plateifu='8459-6101', ifu_name='6101', ra=147.712302507, dec=44.0304545816, elpetro_absmag_g_r=1.1823673248291, elpetro_ba=0.809313, z=0.0463805), ResultRow(mangaid='1-167113', plate=8459, plateifu='8459-12703', ifu_name='12703', ra=148.84161359, dec=44.4405591163, elpetro_absmag_g_r=1.2220287322998, elpetro_ba=0.415025, z=0.0264594), ResultRow(mangaid='1-167380', plate=8453, plateifu='8453-3701', ifu_name='3701', ra=153.231920862, dec=46.4177099017, elpetro_absmag_g_r=1.58125877380371, elpetro_ba=0.478194, z=0.0382131), ResultRow(mangaid='1-167555', plate=8453, plateifu='8453-3703', ifu_name='3703', ra=153.752608461, dec=46.7567528969, elpetro_absmag_g_r=1.26830101013184, elpetro_ba=0.759023, z=0.0246439), ResultRow(mangaid='1-167564', plate=8453, plateifu='8453-12705', ifu_name='12705', ra=153.034483163, dec=46.2936923797, elpetro_absmag_g_r=1.14142608642578, 
elpetro_ba=0.399407, z=0.024247)] ###Markdown The Query Datamodel shows you every parameter that is available to search on. It groups parameters together into common types. ###Code qdm = q.datamodel qdm qdm.groups # look at all the available NSA parameters qdm.groups['nsa'].parameters ###Output _____no_output_____ ###Markdown What's New in Marvin 2.2! Lots of things are new in Marvin 2.2.0. See the list with links to individual sections here http://sdss-marvin.readthedocs.io/en/latest/whats-new.html Marvin now includes MPL-6 data ###Code %matplotlib inline from marvin import config config.switchSasUrl('local') config.forceDbOff() from marvin.tools.cube import Cube plateifu='8485-1901' cube = Cube(plateifu=plateifu) print(cube) maps = cube.getMaps(bintype='HYB10') print(maps) ###Output WARNING: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead <Marvin Cube (plateifu='8485-1901', mode='local', data_origin='file')> <Marvin Maps (plateifu='8485-1901', mode='local', data_origin='file', bintype='HYB10', template='GAU-MILESHC')> ###Markdown Smarter handling of inputs You can still specify **plateifu**, **mangaid**, or **filename** but now Marvin will try to guess your input type if you do not specify an input keyword argument. ###Code from marvin.tools.maps import Maps maps = Maps(plateifu) # or a filename maps = Maps('/Users/Brian/Work/Manga/analysis/v2_3_1/2.1.3/SPX-GAU-MILESHC/8485/1901/manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz') print(maps) ###Output <Marvin Maps (plateifu='8485-1901', mode='local', data_origin='file', bintype='SPX', template='GAU-MILESHC')> ###Markdown Fuzzy indexing and extraction Marvin now includes fuzzy lists and dictionaries in the Maps and Datamodels. This means Marvin will try to guess what you mean by what you type. For example, all of these methods grab the H-alpha flux map.
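Under the hood, a fuzzy lookup of this kind amounts to matching a shorthand key against the full list of property names. A minimal sketch of the idea using only Python's standard `difflib` — an illustration of the concept, not Marvin's actual matching code, and `properties` below is a made-up stand-in for the real datamodel keys:

```python
import difflib

# A few property names standing in for the Maps datamodel keys (illustrative).
properties = ['emline_gflux_ha_6564', 'emline_gflux_hb_4862', 'stellar_vel']

def fuzzy_get(key, choices):
    """Resolve a shorthand key to the closest-matching full property name."""
    if key in choices:                      # exact match wins
        return key
    substrings = [c for c in choices if key in c]
    if len(substrings) == 1:                # unique substring match
        return substrings[0]
    # otherwise fall back on closest spelling
    matches = difflib.get_close_matches(key, choices, n=1, cutoff=0.2)
    return matches[0] if matches else None

print(fuzzy_get('gflux_ha', properties))    # -> emline_gflux_ha_6564
```

In Marvin itself the same idea is exposed through its fuzzy dictionaries, which is why the shorthand `maps['gflux_ha']` in the cell below resolves to the full `emline_gflux_ha_6564` key.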
###Code # grab an H-alpha flux map ha = maps['emline_gflux_ha_6564'] # fuzzy name indexing ha = maps['gflux_ha'] # all map properties are available as class attributes. If using IPython, you can tab complete to see them all. ha = maps.emline_gflux_ha_6564 ###Output WARNING: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead ###Markdown New DRP, DAP and Query Datamodels There are new datamodels representing the MaNGA data for DRP, DAP and Query parameters. The datamodel is attached to every object you instantiate, or it can be accessed independently. For example, the **Maps** datamodel will list all the available map properties. See http://sdss-marvin.readthedocs.io/en/latest/datamodel/datamodels.html for details. ###Code # see the datamodel on maps maps.datamodel ###Output _____no_output_____ ###Markdown Each **Property** contains a name, a channel, the unit of the property, and a description ###Code haew_prop = maps.datamodel['emline_gew_ha'] haew_prop print(haew_prop.name, haew_prop.unit, haew_prop.description) ###Output emline_gew Angstrom Gaussian-fitted equivalent widths measurements (based on EMLINE_GFLUX) ###Markdown The full datamodel is available as a **parent** attribute or you can import it directly ###Code dapdm = maps.datamodel.parent print(dapdm) # get a list of all available DAP datamodels from marvin.utils.datamodel.dap import datamodel print(datamodel) # let's get the MPL-6 datamodel dapdm = datamodel['MPL-6'] print(dapdm) ###Output <DAPDataModel release='2.1.3', n_bintypes=5, n_templates=1, n_properties=292> [<DAPDataModel release='1.1.1', n_bintypes=3, n_templates=3, n_properties=92>, <DAPDataModel release='2.0.2', n_bintypes=4, n_templates=1, n_properties=151>, <DAPDataModel release='2.1.3', n_bintypes=5, n_templates=1, n_properties=292>] <DAPDataModel release='2.1.3', n_bintypes=5, n_templates=1, n_properties=292> ###Markdown Cubes, Maps, ModelCubes now utilize Quantity-based
Objects Most Marvin Tools now use new objects to represent their data. **DataCubes** represent 3-d data, while a **Spectrum** represents a 1-d array of data. These subclass from Astropy Quantities. This means most properties now have associated units. We also now track and propagate inverse variances and masks. ###Code # the cube datamodel shows the available datacubes cube.datamodel.datacubes # and spectra cube.datamodel.spectra ###Output _____no_output_____ ###Markdown The cube flux is now a **DataCube**, has proper units, and has an ivar, mask, and wavelength attached to it ###Code print(type(cube.flux)) print('flux', cube.flux) print('mask', cube.flux.mask) print('wavelength', cube.flux.wavelength) ###Output <class 'marvin.tools.quantities.datacube.DataCube'> flux [[[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] ... [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]]] 1e-17 erg / (Angstrom cm2 s spaxel) mask [[[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] [[1027 1027 1027 ... 
1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] [[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] ... [[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] [[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]] [[1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] ... [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027] [1027 1027 1027 ... 1027 1027 1027]]] wavelength [ 3621.59598486 3622.42998417 3623.26417553 ... 10349.03843826 10351.42166679 10353.80544415] Angstrom ###Markdown Slicing a **Datacube** in 2-d will return a new **DataCube**, while slicing in 3-d will return a **Spectrum** ###Code spec = cube.flux[:,17,17] print(type(spec)) print(spec) print(spec.unit) spec.plot() ###Output <class 'marvin.tools.quantities.spectrum.Spectrum'> [0.54676276 0.46566465 0.4622981 ... 0. 0. 0. ] 1e-17 erg / (Angstrom cm2 s spaxel) 1e-17 erg / (Angstrom cm2 s spaxel) ###Markdown Maskbits There is a new Maskbit class for improved maskbit handling. All objects now include new **Maskbit** versions of the DRP/DAP quality flag (**quality_flag**), targeting bits (**target_flags**), and pixel masks (**pixmask**). Now you can easily look up the labels for bits and create custom masks.
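A mask value such as the 1027 seen above is just a sum of powers of two, so recovering the set bits is a simple base-2 decomposition (1027 = 1 + 2 + 1024, i.e. bits 0, 1, and 10). A minimal sketch of that decomposition — the bit-to-label mapping here is only an illustrative subset, not the full MaNGA pixmask schema:

```python
def values_to_bits(value):
    """Return the positions of all set bits in an integer mask value."""
    return [i for i in range(value.bit_length()) if (value >> i) & 1]

# Illustrative labels for a few bit positions (assumed subset of a pixmask schema).
labels = {0: 'NOCOV', 1: 'LOWCOV', 10: 'DONOTUSE'}

bits = values_to_bits(1027)
print(bits)                                  # -> [0, 1, 10]
print([labels.get(b, 'UNKNOWN') for b in bits])
```

This mirrors what `ha.pixmask.values_to_bits(1027)` returns in the cells below, where the bit positions are then translated to labels with `values_to_labels`.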
See http://sdss-marvin.readthedocs.io/en/latest/utils/maskbit.html for details ###Code # H-alpha DAP quality flag ha.quality_flag ha.target_flags ha.pixmask # bits for mask value 1027 print('bits', ha.pixmask.values_to_bits(1027)) print('labels', ha.pixmask.values_to_labels(1027)) # convert the H-alpha mask into a list of labels ha.pixmask.labels ###Output _____no_output_____ ###Markdown Improved Query and Results Handling The handling of Queries and Results has been improved to provide better means of retrieving all the results of a query, extracting columns of parameters, and quickly plotting results.
* See http://sdss-marvin.readthedocs.io/en/latest/query.html for Query handling
* See http://sdss-marvin.readthedocs.io/en/latest/results.html for Results handling
* See http://sdss-marvin.readthedocs.io/en/latest/datamodel/query_dm.html for how to use the Query Datamodel
* See http://sdss-marvin.readthedocs.io/en/latest/utils/plot-scatter.html for quick scatter plotting
* See http://sdss-marvin.readthedocs.io/en/latest/utils/plot-hist.html for quick histogram plotting
###Code from marvin.tools.query import Query config.setRelease('MPL-4') q = Query(search_filter='nsa.z < 0.1', return_params=['cube.ra', 'cube.dec', 'absmag_g_r', 'nsa.elpetro_ba']) r = q.run() # your results are now in Sets r.results # see the available columns r.columns # quickly plot the redshift vs g-r color output = r.plot('nsa.z', 'absmag_g_r') # or a histogram of the elpetro b/a axis ratio output=r.hist('elpetro_ba') # get all of the g-r colors as a list gr = r.getListOf('absmag_g_r', return_all=True) gr # the results currently hold only the first 100 rows of the total print(r.count, r.totalcount) # let's extend our result set by the next chunk of 100 r.extendSet() print(r.count, r.totalcount) print(r.results) ###Output 200 1282 <ResultSet(set=1.0/7, index=0:200, count_in_set=200, total=1282)> [ResultRow(mangaid='1-109394', plate=8082, plateifu='8082-9102', ifu_name='9102', ra=50.179936141, 
dec=-1.0022917898, elpetro_absmag_g_r=1.26038932800293, elpetro_ba=0.42712, z=0.0361073), ResultRow(mangaid='1-113208', plate=8618, plateifu='8618-3701', ifu_name='3701', ra=317.504479435, dec=9.86822191739, elpetro_absmag_g_r=1.48788070678711, elpetro_ba=0.752286, z=0.0699044), ResultRow(mangaid='1-113219', plate=7815, plateifu='7815-9102', ifu_name='9102', ra=317.374745914, dec=10.0519434342, elpetro_absmag_g_r=0.543312072753906, elpetro_ba=0.517058, z=0.0408897), ResultRow(mangaid='1-113375', plate=7815, plateifu='7815-9101', ifu_name='9101', ra=316.639658795, dec=10.7512221884, elpetro_absmag_g_r=0.757579803466797, elpetro_ba=0.570455, z=0.028215), ResultRow(mangaid='1-113379', plate=7815, plateifu='7815-6101', ifu_name='6101', ra=316.541566803, dec=10.3454195236, elpetro_absmag_g_r=1.09770011901855, elpetro_ba=0.373641, z=0.0171611), ResultRow(mangaid='1-113403', plate=7815, plateifu='7815-12703', ifu_name='12703', ra=316.964281103, dec=11.2623177305, elpetro_absmag_g_r=0.745466232299805, elpetro_ba=0.823788, z=0.0715126), ResultRow(mangaid='1-113418', plate=7815, plateifu='7815-12704', ifu_name='12704', ra=319.353761201, dec=10.2316206875, elpetro_absmag_g_r=1.44098854064941, elpetro_ba=0.456991, z=0.0430806), ResultRow(mangaid='1-113469', plate=7815, plateifu='7815-12702', ifu_name='12702', ra=317.943526819, dec=9.27749462963, elpetro_absmag_g_r=0.847789764404297, elpetro_ba=0.522312, z=0.0394617), ResultRow(mangaid='1-113520', plate=7815, plateifu='7815-1901', ifu_name='1901', ra=317.502202242, dec=11.5106477077, elpetro_absmag_g_r=1.7510347366333, elpetro_ba=0.751988, z=0.0167652), ResultRow(mangaid='1-113525', plate=8618, plateifu='8618-6103', ifu_name='6103', ra=317.430068351, dec=11.3552406345, elpetro_absmag_g_r=1.57906627655029, elpetro_ba=0.78557, z=0.0169457), ResultRow(mangaid='1-113525', plate=7815, plateifu='7815-1902', ifu_name='1902', ra=317.430068351, dec=11.3552406345, elpetro_absmag_g_r=1.57906627655029, elpetro_ba=0.78557, z=0.0169457), 
ResultRow(mangaid='1-113539', plate=8618, plateifu='8618-12701', ifu_name='12701', ra=317.979595193, dec=11.3794496273, elpetro_absmag_g_r=1.26716613769531, elpetro_ba=0.31432, z=0.0177002), ResultRow(mangaid='1-113540', plate=7815, plateifu='7815-3702', ifu_name='3702', ra=317.903201533, dec=11.4969433994, elpetro_absmag_g_r=0.952407836914062, elpetro_ba=0.889156, z=0.0293823), ResultRow(mangaid='1-113567', plate=8618, plateifu='8618-1902', ifu_name='1902', ra=318.026426419, dec=11.3451572409, elpetro_absmag_g_r=1.41732978820801, elpetro_ba=0.515994, z=0.0167432), ResultRow(mangaid='1-113567', plate=7815, plateifu='7815-12701', ifu_name='12701', ra=318.026426419, dec=11.3451572409, elpetro_absmag_g_r=1.41732978820801, elpetro_ba=0.515994, z=0.0167432), ResultRow(mangaid='1-113585', plate=7815, plateifu='7815-3703', ifu_name='3703', ra=319.11342841, dec=10.7676202056, elpetro_absmag_g_r=1.68158912658691, elpetro_ba=0.773512, z=0.070276), ResultRow(mangaid='1-113587', plate=8618, plateifu='8618-12704', ifu_name='12704', ra=319.273361936, dec=11.1201347053, elpetro_absmag_g_r=1.02355575561523, elpetro_ba=0.858524, z=0.0704926), ResultRow(mangaid='1-113647', plate=8618, plateifu='8618-6104', ifu_name='6104', ra=319.814830226, dec=10.070628454, elpetro_absmag_g_r=1.78754997253418, elpetro_ba=0.850177, z=0.0738563), ResultRow(mangaid='1-113651', plate=7815, plateifu='7815-3704', ifu_name='3704', ra=319.233949063, dec=9.63757525774, elpetro_absmag_g_r=1.4986743927002, elpetro_ba=0.941069, z=0.0708847), ResultRow(mangaid='1-113654', plate=8618, plateifu='8618-9102', ifu_name='9102', ra=319.271463809, dec=9.9723035679, elpetro_absmag_g_r=1.10831832885742, elpetro_ba=0.451358, z=0.0430694), ResultRow(mangaid='1-113663', plate=8618, plateifu='8618-3703', ifu_name='3703', ra=318.804558778, dec=9.91312455151, elpetro_absmag_g_r=2.80322933197021, elpetro_ba=0.502782, z=0.0316328), ResultRow(mangaid='1-113672', plate=8618, plateifu='8618-3704', ifu_name='3704', ra=318.862286217, 
dec=9.75781705378, elpetro_absmag_g_r=1.25676536560059, elpetro_ba=0.984299, z=0.0702278), ResultRow(mangaid='1-113698', plate=8618, plateifu='8618-1901', ifu_name='1901', ra=319.194045241, dec=11.5400106533, elpetro_absmag_g_r=0.995195388793945, elpetro_ba=0.567433, z=0.0167445), ResultRow(mangaid='1-113700', plate=8618, plateifu='8618-12703', ifu_name='12703', ra=319.451824118, dec=11.6605961542, elpetro_absmag_g_r=0.61408805847168, elpetro_ba=0.751346, z=0.0378372), ResultRow(mangaid='1-113712', plate=7815, plateifu='7815-6104', ifu_name='6104', ra=319.193098655, dec=11.0437407875, elpetro_absmag_g_r=0.69244384765625, elpetro_ba=0.942534, z=0.0806967), ResultRow(mangaid='1-114073', plate=7975, plateifu='7975-12705', ifu_name='12705', ra=324.895915071, dec=11.2049630634, elpetro_absmag_g_r=0.751516342163086, elpetro_ba=0.775431, z=0.0402895), ResultRow(mangaid='1-114082', plate=7975, plateifu='7975-3701', ifu_name='3701', ra=324.152525127, dec=10.5067325085, elpetro_absmag_g_r=1.44381332397461, elpetro_ba=0.425806, z=0.0402683), ResultRow(mangaid='1-114121', plate=7975, plateifu='7975-12701', ifu_name='12701', ra=323.466394588, dec=10.0718531123, elpetro_absmag_g_r=1.43171119689941, elpetro_ba=0.520187, z=0.0879313), ResultRow(mangaid='1-114128', plate=7975, plateifu='7975-6101', ifu_name='6101', ra=323.470604621, dec=10.4397349551, elpetro_absmag_g_r=1.86342239379883, elpetro_ba=0.864153, z=0.077875), ResultRow(mangaid='1-114129', plate=7975, plateifu='7975-12702', ifu_name='12702', ra=323.521211519, dec=10.4218555682, elpetro_absmag_g_r=2.19032287597656, elpetro_ba=0.521832, z=0.0774097), ResultRow(mangaid='1-114145', plate=7975, plateifu='7975-6102', ifu_name='6102', ra=323.577092837, dec=11.2143239831, elpetro_absmag_g_r=1.41496467590332, elpetro_ba=0.655866, z=0.0341885), ResultRow(mangaid='1-114171', plate=7975, plateifu='7975-3702', ifu_name='3702', ra=323.296326308, dec=10.6442039273, elpetro_absmag_g_r=1.70641708374023, elpetro_ba=0.849777, z=0.0881405), 
ResultRow(mangaid='1-114303', plate=7975, plateifu='7975-1901', ifu_name='1901', ra=323.65768, dec=11.42181, elpetro_absmag_g_r=0.658689498901367, elpetro_ba=0.505907, z=0.0220107), ResultRow(mangaid='1-114306', plate=7975, plateifu='7975-9101', ifu_name='9101', ra=323.742750886, dec=11.296528361, elpetro_absmag_g_r=0.99525260925293, elpetro_ba=0.811891, z=0.0636505), ResultRow(mangaid='1-114325', plate=7975, plateifu='7975-12703', ifu_name='12703', ra=324.094963475, dec=12.2363038289, elpetro_absmag_g_r=1.34337997436523, elpetro_ba=0.244175, z=0.0288791), ResultRow(mangaid='1-114334', plate=7975, plateifu='7975-1902', ifu_name='1902', ra=324.259707865, dec=11.9062032693, elpetro_absmag_g_r=1.43183898925781, elpetro_ba=0.56156, z=0.0222473), ResultRow(mangaid='1-114454', plate=7975, plateifu='7975-12704', ifu_name='12704', ra=324.586417578, dec=11.3486728499, elpetro_absmag_g_r=1.29723358154297, elpetro_ba=0.591206, z=0.0888606), ResultRow(mangaid='1-114465', plate=7975, plateifu='7975-6104', ifu_name='6104', ra=324.89155826, dec=10.4834807378, elpetro_absmag_g_r=1.21394157409668, elpetro_ba=0.867381, z=0.0788547), ResultRow(mangaid='1-114500', plate=7975, plateifu='7975-9102', ifu_name='9102', ra=324.548678082, dec=12.1942577854, elpetro_absmag_g_r=1.14164924621582, elpetro_ba=0.355321, z=0.0220849), ResultRow(mangaid='1-114502', plate=7975, plateifu='7975-6103', ifu_name='6103', ra=324.799320383, dec=11.9393222318, elpetro_absmag_g_r=1.4673023223877, elpetro_ba=0.960909, z=0.0798058), ResultRow(mangaid='1-114532', plate=7975, plateifu='7975-3703', ifu_name='3703', ra=325.161350811, dec=11.7227434323, elpetro_absmag_g_r=1.73165702819824, elpetro_ba=0.920698, z=0.0902261), ResultRow(mangaid='1-114928', plate=7977, plateifu='7977-3702', ifu_name='3702', ra=331.080925269, dec=12.9683778244, elpetro_absmag_g_r=1.65719413757324, elpetro_ba=0.680598, z=0.0273478), ResultRow(mangaid='1-114955', plate=7977, plateifu='7977-12701', ifu_name='12701', ra=332.602089837, 
dec=11.7130772993, elpetro_absmag_g_r=1.01249313354492, elpetro_ba=0.742333, z=0.0922799), ResultRow(mangaid='1-114956', plate=7977, plateifu='7977-3704', ifu_name='3704', ra=332.798726703, dec=11.8007324019, elpetro_absmag_g_r=1.3456974029541, elpetro_ba=0.756417, z=0.0270248), ResultRow(mangaid='1-114980', plate=7977, plateifu='7977-9102', ifu_name='9102', ra=332.83066426, dec=12.1847175842, elpetro_absmag_g_r=1.14808464050293, elpetro_ba=0.656607, z=0.0630915), ResultRow(mangaid='1-114998', plate=7977, plateifu='7977-6102', ifu_name='6102', ra=332.756351306, dec=12.3743026872, elpetro_absmag_g_r=2.77035713195801, elpetro_ba=0.6304, z=0.0614042), ResultRow(mangaid='1-115062', plate=7977, plateifu='7977-1901', ifu_name='1901', ra=330.855372733, dec=12.6758983985, elpetro_absmag_g_r=1.65952682495117, elpetro_ba=0.865932, z=0.0260569), ResultRow(mangaid='1-115085', plate=7977, plateifu='7977-6103', ifu_name='6103', ra=331.802634213, dec=13.2660525434, elpetro_absmag_g_r=0.912630081176758, elpetro_ba=0.472784, z=0.0349304), ResultRow(mangaid='1-115097', plate=7977, plateifu='7977-3701', ifu_name='3701', ra=332.203447059, dec=13.3647373417, elpetro_absmag_g_r=1.49947357177734, elpetro_ba=0.528689, z=0.0274473), ResultRow(mangaid='1-115128', plate=7977, plateifu='7977-1902', ifu_name='1902', ra=332.481316937, dec=12.8180504327, elpetro_absmag_g_r=1.1044979095459, elpetro_ba=0.49669, z=0.0358116), ResultRow(mangaid='1-115162', plate=7977, plateifu='7977-12703', ifu_name='12703', ra=333.201842347, dec=13.334120927, elpetro_absmag_g_r=1.13131713867188, elpetro_ba=0.479943, z=0.0738627), ResultRow(mangaid='1-115320', plate=7977, plateifu='7977-3703', ifu_name='3703', ra=333.052045245, dec=12.205190661, elpetro_absmag_g_r=0.99519157409668, elpetro_ba=0.842721, z=0.0275274), ResultRow(mangaid='1-124604', plate=8439, plateifu='8439-6103', ifu_name='6103', ra=141.34417921, dec=50.5536812778, elpetro_absmag_g_r=1.38611221313477, elpetro_ba=0.345553, z=0.0253001), 
ResultRow(mangaid='1-133922', plate=8486, plateifu='8486-6104', ifu_name='6104', ra=239.195689664, dec=47.9955208307, elpetro_absmag_g_r=1.51949119567871, elpetro_ba=0.390132, z=0.0174718), ResultRow(mangaid='1-133941', plate=8486, plateifu='8486-9102', ifu_name='9102', ra=239.030589848, dec=48.0308761201, elpetro_absmag_g_r=1.04214859008789, elpetro_ba=0.740501, z=0.0189045), ResultRow(mangaid='1-133945', plate=8486, plateifu='8486-3703', ifu_name='3703', ra=238.881357667, dec=47.677310104, elpetro_absmag_g_r=1.70501899719238, elpetro_ba=0.75216, z=0.0183248), ResultRow(mangaid='1-133948', plate=8486, plateifu='8486-6103', ifu_name='6103', ra=238.891298957, dec=48.0223923799, elpetro_absmag_g_r=1.62374401092529, elpetro_ba=0.662078, z=0.0195194), ResultRow(mangaid='1-133976', plate=8486, plateifu='8486-9101', ifu_name='9101', ra=238.718472619, dec=47.8808922742, elpetro_absmag_g_r=1.26091766357422, elpetro_ba=0.627185, z=0.0182938), ResultRow(mangaid='1-133987', plate=8486, plateifu='8486-1902', ifu_name='1902', ra=239.334163047, dec=48.2072621316, elpetro_absmag_g_r=1.73217391967773, elpetro_ba=0.902851, z=0.0195435), ResultRow(mangaid='1-134004', plate=8486, plateifu='8486-1901', ifu_name='1901', ra=238.448582292, dec=47.4049584412, elpetro_absmag_g_r=1.27153015136719, elpetro_ba=0.667273, z=0.0185601), ResultRow(mangaid='1-134020', plate=8486, plateifu='8486-6102', ifu_name='6102', ra=238.046893627, dec=48.0439162921, elpetro_absmag_g_r=1.4318904876709, elpetro_ba=0.452976, z=0.0193267), ResultRow(mangaid='1-134209', plate=8549, plateifu='8549-9101', ifu_name='9101', ra=242.276471895, dec=46.6712048189, elpetro_absmag_g_r=1.46211814880371, elpetro_ba=0.938842, z=0.0545042), ResultRow(mangaid='1-134239', plate=8549, plateifu='8549-3703', ifu_name='3703', ra=241.416442386, dec=46.8465606897, elpetro_absmag_g_r=1.20720481872559, elpetro_ba=0.840219, z=0.0571086), ResultRow(mangaid='1-134248', plate=8549, plateifu='8549-3702', ifu_name='3702', ra=241.005278975, 
dec=46.8029102028, elpetro_absmag_g_r=1.04830741882324, elpetro_ba=0.603141, z=0.0212204), ResultRow(mangaid='1-134293', plate=8549, plateifu='8549-6103', ifu_name='6103', ra=240.418740846, dec=46.085291751, elpetro_absmag_g_r=0.724908828735352, elpetro_ba=0.685683, z=0.0416784), ResultRow(mangaid='1-134503', plate=8555, plateifu='8555-1901', ifu_name='1901', ra=243.873718478, dec=44.2912632693, elpetro_absmag_g_r=1.38505744934082, elpetro_ba=0.580866, z=0.0371472), ResultRow(mangaid='1-134562', plate=8549, plateifu='8549-1902', ifu_name='1902', ra=242.727439731, dec=44.985695801, elpetro_absmag_g_r=0.999540328979492, elpetro_ba=0.709542, z=0.0355137), ResultRow(mangaid='1-134597', plate=8549, plateifu='8549-12705', ifu_name='12705', ra=241.907223711, dec=45.0653702307, elpetro_absmag_g_r=1.32281875610352, elpetro_ba=0.493211, z=0.0441938), ResultRow(mangaid='1-134599', plate=8549, plateifu='8549-12704', ifu_name='12704', ra=242.978644743, dec=46.1277269855, elpetro_absmag_g_r=1.2156925201416, elpetro_ba=0.347987, z=0.019658), ResultRow(mangaid='1-134614', plate=8549, plateifu='8549-6102', ifu_name='6102', ra=243.009178672, dec=45.7750314981, elpetro_absmag_g_r=1.25503730773926, elpetro_ba=0.409631, z=0.0528277), ResultRow(mangaid='1-134634', plate=8549, plateifu='8549-3704', ifu_name='3704', ra=243.18537291, dec=45.3520102657, elpetro_absmag_g_r=1.71317291259766, elpetro_ba=0.601301, z=0.0523251), ResultRow(mangaid='1-134848', plate=8555, plateifu='8555-12703', ifu_name='12703', ra=244.331994382, dec=43.4796723691, elpetro_absmag_g_r=1.4580078125, elpetro_ba=0.276868, z=0.0584495), ResultRow(mangaid='1-134924', plate=8555, plateifu='8555-9101', ifu_name='9101', ra=245.662015493, dec=43.4646577078, elpetro_absmag_g_r=1.76020240783691, elpetro_ba=0.819258, z=0.0319997), ResultRow(mangaid='1-134954', plate=8555, plateifu='8555-12705', ifu_name='12705', ra=246.578190983, dec=43.4074643202, elpetro_absmag_g_r=1.38137054443359, elpetro_ba=0.692219, z=0.0315232), 
ResultRow(mangaid='1-134964', plate=8555, plateifu='8555-3701', ifu_name='3701', ra=246.760690284, dec=43.4760996734, elpetro_absmag_g_r=1.5971508026123, elpetro_ba=0.853938, z=0.0462348), ResultRow(mangaid='1-135030', plate=8603, plateifu='8603-12704', ifu_name='12704', ra=247.893876589, dec=40.5655973228, elpetro_absmag_g_r=1.31695175170898, elpetro_ba=0.700621, z=0.0273289), ResultRow(mangaid='1-135054', plate=8550, plateifu='8550-12703', ifu_name='12703', ra=247.674430234, dec=40.5293893805, elpetro_absmag_g_r=1.34156799316406, elpetro_ba=0.853565, z=0.0298122), ResultRow(mangaid='1-135055', plate=8601, plateifu='8601-6104', ifu_name='6104', ra=247.641287575, dec=40.5394009252, elpetro_absmag_g_r=1.68307113647461, elpetro_ba=0.808577, z=0.0300581), ResultRow(mangaid='1-135057', plate=8601, plateifu='8601-12703', ifu_name='12703', ra=247.57407, dec=40.59861, elpetro_absmag_g_r=0.928314208984375, elpetro_ba=0.834526, z=0.0288518), ResultRow(mangaid='1-135058', plate=8603, plateifu='8603-6103', ifu_name='6103', ra=247.800367796, dec=40.4218744432, elpetro_absmag_g_r=1.1861629486084, elpetro_ba=0.392703, z=0.0270087), ResultRow(mangaid='1-135077', plate=8312, plateifu='8312-6104', ifu_name='6104', ra=247.638466864, dec=41.4385861863, elpetro_absmag_g_r=1.33458137512207, elpetro_ba=0.458094, z=0.0290664), ResultRow(mangaid='1-135095', plate=8312, plateifu='8312-3702', ifu_name='3702', ra=247.245291144, dec=41.255253243, elpetro_absmag_g_r=1.44723129272461, elpetro_ba=0.658268, z=0.0332324), ResultRow(mangaid='1-135129', plate=8603, plateifu='8603-12705', ifu_name='12705', ra=247.280269588, dec=40.5910287121, elpetro_absmag_g_r=1.81981086730957, elpetro_ba=0.503666, z=0.0327969), ResultRow(mangaid='1-135133', plate=8603, plateifu='8603-12703', ifu_name='12703', ra=247.282646413, dec=40.6650474998, elpetro_absmag_g_r=1.36585807800293, elpetro_ba=0.627429, z=0.0299683), ResultRow(mangaid='1-135134', plate=8603, plateifu='8603-9101', ifu_name='9101', ra=247.225624269, 
dec=40.8666111706, elpetro_absmag_g_r=1.85215187072754, elpetro_ba=0.958519, z=0.030343), ResultRow(mangaid='1-135152', plate=8312, plateifu='8312-6103', ifu_name='6103', ra=246.887611078, dec=41.1385055016, elpetro_absmag_g_r=0.762582778930664, elpetro_ba=0.839506, z=0.0301811), ResultRow(mangaid='1-135157', plate=8603, plateifu='8603-3702', ifu_name='3702', ra=247.04131843, dec=40.6956030265, elpetro_absmag_g_r=1.68464851379395, elpetro_ba=0.518096, z=0.0323713), ResultRow(mangaid='1-135207', plate=8555, plateifu='8555-1902', ifu_name='1902', ra=246.323470587, dec=42.6942265737, elpetro_absmag_g_r=1.51096343994141, elpetro_ba=0.755948, z=0.031485), ResultRow(mangaid='1-135371', plate=8588, plateifu='8588-9101', ifu_name='9101', ra=250.156240419, dec=39.2216349362, elpetro_absmag_g_r=1.37564086914062, elpetro_ba=0.430169, z=0.0352359), ResultRow(mangaid='1-135372', plate=8588, plateifu='8588-6102', ifu_name='6102', ra=250.116709759, dec=39.3201174959, elpetro_absmag_g_r=1.68138885498047, elpetro_ba=0.789335, z=0.0300793), ResultRow(mangaid='1-135383', plate=8588, plateifu='8588-12705', ifu_name='12705', ra=250.312873125, dec=39.7523514003, elpetro_absmag_g_r=1.2461109161377, elpetro_ba=0.355884, z=0.0301398), ResultRow(mangaid='1-135468', plate=8550, plateifu='8550-12705', ifu_name='12705', ra=249.135695215, dec=39.0278800132, elpetro_absmag_g_r=1.37894058227539, elpetro_ba=0.670573, z=0.029986), ResultRow(mangaid='1-135502', plate=8604, plateifu='8604-12703', ifu_name='12703', ra=247.76417484, dec=39.838503868, elpetro_absmag_g_r=1.57090950012207, elpetro_ba=0.804992, z=0.0305383), ResultRow(mangaid='1-135503', plate=8604, plateifu='8604-3703', ifu_name='3703', ra=247.882111795, dec=39.8976507098, elpetro_absmag_g_r=1.6621150970459, elpetro_ba=0.914384, z=0.0296457), ResultRow(mangaid='1-135506', plate=8601, plateifu='8601-3704', ifu_name='3704', ra=247.948553785, dec=39.8142396526, elpetro_absmag_g_r=1.70755767822266, elpetro_ba=0.740217, z=0.0295479), 
ResultRow(mangaid='1-135512', plate=8601, plateifu='8601-6102', ifu_name='6102', ra=247.711831631, dec=40.0247994472, elpetro_absmag_g_r=0.778741836547852, elpetro_ba=0.783227, z=0.0279629), ResultRow(mangaid='1-135516', plate=8550, plateifu='8550-6104', ifu_name='6104', ra=248.41315, dec=39.25763, elpetro_absmag_g_r=1.33112716674805, elpetro_ba=0.41841, z=0.0314747), ResultRow(mangaid='1-135517', plate=8588, plateifu='8588-6101', ifu_name='6101', ra=248.456755755, dec=39.2632054313, elpetro_absmag_g_r=1.17428970336914, elpetro_ba=0.961436, z=0.0317611), ResultRow(mangaid='1-135530', plate=8550, plateifu='8550-9101', ifu_name='9101', ra=247.409672103, dec=40.2353879985, elpetro_absmag_g_r=1.7724609375, elpetro_ba=0.286038, z=0.0283296), ResultRow(mangaid='1-135545', plate=8601, plateifu='8601-6103', ifu_name='6103', ra=247.530374396, dec=40.8801572026, elpetro_absmag_g_r=1.43307685852051, elpetro_ba=0.402053, z=0.0301334), ResultRow(mangaid='1-135548', plate=8601, plateifu='8601-12702', ifu_name='12702', ra=247.591672626, dec=40.9242421985, elpetro_absmag_g_r=1.05030250549316, elpetro_ba=0.948442, z=0.030559), ResultRow(mangaid='1-135568', plate=8601, plateifu='8601-12701', ifu_name='12701', ra=247.718035556, dec=41.2861515449, elpetro_absmag_g_r=0.790615081787109, elpetro_ba=0.6425, z=0.0938565), ResultRow(mangaid='1-135641', plate=8588, plateifu='8588-12704', ifu_name='12704', ra=249.557305714, dec=40.1468209363, elpetro_absmag_g_r=1.44169998168945, elpetro_ba=0.377239, z=0.030363), ResultRow(mangaid='1-135657', plate=8588, plateifu='8588-1901', ifu_name='1901', ra=249.717085826, dec=40.1993481631, elpetro_absmag_g_r=1.22106170654297, elpetro_ba=0.772008, z=0.0364618), ResultRow(mangaid='1-135679', plate=8588, plateifu='8588-6103', ifu_name='6103', ra=250.349059361, dec=40.2187885261, elpetro_absmag_g_r=1.4596061706543, elpetro_ba=0.57416, z=0.0331057), ResultRow(mangaid='1-135794', plate=8588, plateifu='8588-1902', ifu_name='1902', ra=249.770169345, 
dec=39.2907848202, elpetro_absmag_g_r=1.6043529510498, elpetro_ba=0.617959, z=0.0304343), ResultRow(mangaid='1-135810', plate=8601, plateifu='8601-12705', ifu_name='12705', ra=250.12314401, dec=39.2351144868, elpetro_absmag_g_r=1.43718338012695, elpetro_ba=0.451484, z=0.0297241), ResultRow(mangaid='1-136120', plate=8606, plateifu='8606-3701', ifu_name='3701', ra=254.997419646, dec=36.0290774727, elpetro_absmag_g_r=1.36807250976562, elpetro_ba=0.780117, z=0.0573351), ResultRow(mangaid='1-136248', plate=8606, plateifu='8606-3702', ifu_name='3702', ra=253.793913226, dec=36.9063091542, elpetro_absmag_g_r=1.42204856872559, elpetro_ba=0.50548, z=0.0235624), ResultRow(mangaid='1-136268', plate=8606, plateifu='8606-6101', ifu_name='6101', ra=254.44755809, dec=37.6877060265, elpetro_absmag_g_r=1.20418357849121, elpetro_ba=0.498686, z=0.0416946), ResultRow(mangaid='1-136286', plate=8606, plateifu='8606-9102', ifu_name='9102', ra=255.709053426, dec=36.7067487022, elpetro_absmag_g_r=0.959020614624023, elpetro_ba=0.425402, z=0.0327918), ResultRow(mangaid='1-136304', plate=8606, plateifu='8606-1902', ifu_name='1902', ra=256.01730405, dec=36.4373676031, elpetro_absmag_g_r=1.11434555053711, elpetro_ba=0.488437, z=0.0236332), ResultRow(mangaid='1-136305', plate=8606, plateifu='8606-3704', ifu_name='3704', ra=255.915542507, dec=36.3849337159, elpetro_absmag_g_r=1.20375823974609, elpetro_ba=0.379571, z=0.0246675), ResultRow(mangaid='1-136306', plate=8606, plateifu='8606-12702', ifu_name='12702', ra=255.869931612, dec=36.4366645326, elpetro_absmag_g_r=1.50382232666016, elpetro_ba=0.873923, z=0.0231691), ResultRow(mangaid='1-137528', plate=8440, plateifu='8440-6103', ifu_name='6103', ra=134.40495469, dec=41.0439158135, elpetro_absmag_g_r=1.47062683105469, elpetro_ba=0.814622, z=0.0874946), ResultRow(mangaid='1-137714', plate=8247, plateifu='8247-3704', ifu_name='3704', ra=136.039205522, dec=42.3034211072, elpetro_absmag_g_r=1.52022552490234, elpetro_ba=0.529873, z=0.0265976), 
ResultRow(mangaid='1-137730', plate=8247, plateifu='8247-9101', ifu_name='9101', ra=136.778259104, dec=42.5951034895, elpetro_absmag_g_r=0.883840560913086, elpetro_ba=0.883013, z=0.0415657), ResultRow(mangaid='1-137795', plate=8247, plateifu='8247-12702', ifu_name='12702', ra=135.722564417, dec=43.2477264356, elpetro_absmag_g_r=0.863546371459961, elpetro_ba=0.696187, z=0.0436196), ResultRow(mangaid='1-137797', plate=8247, plateifu='8247-12703', ifu_name='12703', ra=136.363181204, dec=44.1438800822, elpetro_absmag_g_r=0.999143600463867, elpetro_ba=0.640129, z=0.0533346), ResultRow(mangaid='1-137799', plate=8247, plateifu='8247-3703', ifu_name='3703', ra=136.842484254, dec=43.275431327, elpetro_absmag_g_r=1.47824478149414, elpetro_ba=0.873061, z=0.0415027), ResultRow(mangaid='1-137801', plate=8249, plateifu='8249-3701', ifu_name='3701', ra=136.68645847, dec=44.2609809065, elpetro_absmag_g_r=1.58131790161133, elpetro_ba=0.89272, z=0.0490247), ResultRow(mangaid='1-137801', plate=8247, plateifu='8247-3702', ifu_name='3702', ra=136.68645847, dec=44.2609809065, elpetro_absmag_g_r=1.58131790161133, elpetro_ba=0.89272, z=0.0490247), ResultRow(mangaid='1-137844', plate=8250, plateifu='8250-9102', ifu_name='9102', ra=139.427012288, dec=44.1006868066, elpetro_absmag_g_r=1.75845336914062, elpetro_ba=0.749366, z=0.0323374), ResultRow(mangaid='1-137845', plate=8250, plateifu='8250-9101', ifu_name='9101', ra=139.308858804, dec=44.4891619278, elpetro_absmag_g_r=1.76508140563965, elpetro_ba=0.744394, z=0.0320271), ResultRow(mangaid='1-137845', plate=8249, plateifu='8249-6104', ifu_name='6104', ra=139.308858804, dec=44.4891619278, elpetro_absmag_g_r=1.76508140563965, elpetro_ba=0.744394, z=0.0320271), ResultRow(mangaid='1-137853', plate=8250, plateifu='8250-3702', ifu_name='3702', ra=138.935541667, dec=44.2360887374, elpetro_absmag_g_r=1.66989135742188, elpetro_ba=0.86344, z=0.0321364), ResultRow(mangaid='1-137853', plate=8249, plateifu='8249-12705', ifu_name='12705', 
ra=138.935541667, dec=44.2360887374, elpetro_absmag_g_r=1.66989135742188, elpetro_ba=0.86344, z=0.0321364), ResultRow(mangaid='1-137870', plate=8247, plateifu='8247-12704', ifu_name='12704', ra=136.730098431, dec=44.121516356, elpetro_absmag_g_r=0.911367416381836, elpetro_ba=0.883403, z=0.0494434), ResultRow(mangaid='1-137875', plate=8249, plateifu='8249-6102', ifu_name='6102', ra=137.335924379, dec=45.0655135856, elpetro_absmag_g_r=0.834562301635742, elpetro_ba=0.943022, z=0.0510126), ResultRow(mangaid='1-137883', plate=8249, plateifu='8249-3704', ifu_name='3704', ra=137.874763008, dec=45.4683204593, elpetro_absmag_g_r=1.44910621643066, elpetro_ba=0.802596, z=0.0268253), ResultRow(mangaid='1-137890', plate=8249, plateifu='8249-1901', ifu_name='1901', ra=137.219338724, dec=44.9322670576, elpetro_absmag_g_r=1.19685363769531, elpetro_ba=0.549448, z=0.0265684), ResultRow(mangaid='1-137898', plate=8249, plateifu='8249-12702', ifu_name='12702', ra=137.562412054, dec=44.6841342226, elpetro_absmag_g_r=0.960855484008789, elpetro_ba=0.379162, z=0.0346482), ResultRow(mangaid='1-137908', plate=8249, plateifu='8249-12703', ifu_name='12703', ra=139.55919103, dec=45.6516888989, elpetro_absmag_g_r=1.1392650604248, elpetro_ba=0.700622, z=0.0269041), ResultRow(mangaid='1-137912', plate=8250, plateifu='8250-12703', ifu_name='12703', ra=139.647743513, dec=44.5967370112, elpetro_absmag_g_r=0.227899551391602, elpetro_ba=0.779483, z=0.014213), ResultRow(mangaid='1-137915', plate=8249, plateifu='8249-1902', ifu_name='1902', ra=139.797122285, dec=45.3665231283, elpetro_absmag_g_r=1.49036026000977, elpetro_ba=0.961667, z=0.031543), ResultRow(mangaid='1-137961', plate=8249, plateifu='8249-3703', ifu_name='3703', ra=139.720468628, dec=45.7277823533, elpetro_absmag_g_r=1.23330879211426, elpetro_ba=0.895169, z=0.026438), ResultRow(mangaid='1-138021', plate=8252, plateifu='8252-12705', ifu_name='12705', ra=145.443221426, dec=46.9738383647, elpetro_absmag_g_r=1.2242603302002, 
elpetro_ba=0.853881, z=0.0255975), ResultRow(mangaid='1-138034', plate=8252, plateifu='8252-3701', ifu_name='3701', ra=144.846118089, dec=47.1268642387, elpetro_absmag_g_r=1.54246711730957, elpetro_ba=0.52102, z=0.027267), ResultRow(mangaid='1-138087', plate=8252, plateifu='8252-12701', ifu_name='12701', ra=144.23925577, dec=48.2941162265, elpetro_absmag_g_r=0.339872360229492, elpetro_ba=0.643171, z=0.0249804), ResultRow(mangaid='1-138102', plate=8252, plateifu='8252-6102', ifu_name='6102', ra=144.557956402, dec=48.3883017672, elpetro_absmag_g_r=1.02695465087891, elpetro_ba=0.739386, z=0.0257882), ResultRow(mangaid='1-138105', plate=8252, plateifu='8252-6101', ifu_name='6101', ra=144.617048762, dec=48.5255082955, elpetro_absmag_g_r=1.2266845703125, elpetro_ba=0.738417, z=0.0248735), ResultRow(mangaid='1-138106', plate=8252, plateifu='8252-3703', ifu_name='3703', ra=144.352308981, dec=48.5154530802, elpetro_absmag_g_r=1.10024833679199, elpetro_ba=0.806873, z=0.0243491), ResultRow(mangaid='1-138140', plate=8252, plateifu='8252-3704', ifu_name='3704', ra=145.308121958, dec=47.6885981864, elpetro_absmag_g_r=1.4588623046875, elpetro_ba=0.869377, z=0.0467992), ResultRow(mangaid='1-138157', plate=8252, plateifu='8252-9102', ifu_name='9102', ra=145.541530882, dec=48.0128634742, elpetro_absmag_g_r=0.744720458984375, elpetro_ba=0.630656, z=0.0561577), ResultRow(mangaid='1-138164', plate=8252, plateifu='8252-1902', ifu_name='1902', ra=146.091838441, dec=47.459850984, elpetro_absmag_g_r=1.3321475982666, elpetro_ba=0.917753, z=0.0258991), ResultRow(mangaid='1-147394', plate=8250, plateifu='8250-12705', ifu_name='12705', ra=140.39879069, dec=43.2572462761, elpetro_absmag_g_r=0.938104629516602, elpetro_ba=0.255031, z=0.0160493), ResultRow(mangaid='1-147475', plate=8453, plateifu='8453-12704', ifu_name='12704', ra=153.13479279, dec=46.6953613957, elpetro_absmag_g_r=0.833671569824219, elpetro_ba=0.885371, z=0.0381522), ResultRow(mangaid='1-147488', plate=8453, plateifu='8453-1902', 
ifu_name='1902', ra=153.21425096, dec=46.9128221111, elpetro_absmag_g_r=1.23658561706543, elpetro_ba=0.452716, z=0.0241526), ResultRow(mangaid='1-147496', plate=8453, plateifu='8453-6102', ifu_name='6102', ra=153.213639346, dec=47.2949237539, elpetro_absmag_g_r=0.782651901245117, elpetro_ba=0.763455, z=0.0395361), ResultRow(mangaid='1-147507', plate=8453, plateifu='8453-6101', ifu_name='6101', ra=152.773273523, dec=46.8995324281, elpetro_absmag_g_r=1.36958885192871, elpetro_ba=0.591953, z=0.0250793), ResultRow(mangaid='1-147514', plate=8453, plateifu='8453-12701', ifu_name='12701', ra=151.309949901, dec=46.6508890341, elpetro_absmag_g_r=1.22234153747559, elpetro_ba=0.814928, z=0.0251003), ResultRow(mangaid='1-147521', plate=8453, plateifu='8453-3702', ifu_name='3702', ra=152.545357653, dec=46.9522671141, elpetro_absmag_g_r=1.49322319030762, elpetro_ba=0.459684, z=0.0253024), ResultRow(mangaid='1-147522', plate=8453, plateifu='8453-9102', ifu_name='9102', ra=152.514716814, dec=47.1209306545, elpetro_absmag_g_r=1.25778961181641, elpetro_ba=0.923625, z=0.0653628), ResultRow(mangaid='1-147537', plate=8453, plateifu='8453-12702', ifu_name='12702', ra=151.547771122, dec=47.2950386608, elpetro_absmag_g_r=1.23126220703125, elpetro_ba=0.554456, z=0.0381068), ResultRow(mangaid='1-147602', plate=8453, plateifu='8453-6103', ifu_name='6103', ra=151.729558675, dec=47.9841111295, elpetro_absmag_g_r=1.75717353820801, elpetro_ba=0.932527, z=0.067855), ResultRow(mangaid='1-147649', plate=8453, plateifu='8453-9101', ifu_name='9101', ra=152.046182936, dec=47.5174726058, elpetro_absmag_g_r=1.00096130371094, elpetro_ba=0.351228, z=0.0384484), ResultRow(mangaid='1-147685', plate=8452, plateifu='8452-12702', ifu_name='12702', ra=156.044276918, dec=47.5239549356, elpetro_absmag_g_r=1.55574607849121, elpetro_ba=0.233187, z=0.0425735), ResultRow(mangaid='1-147787', plate=8453, plateifu='8453-6104', ifu_name='6104', ra=154.119427243, dec=47.3648162968, elpetro_absmag_g_r=1.19962120056152, 
elpetro_ba=0.395606, z=0.0403757), ResultRow(mangaid='1-147815', plate=8453, plateifu='8453-1901', ifu_name='1901', ra=153.365546207, dec=47.516235898, elpetro_absmag_g_r=1.38706016540527, elpetro_ba=0.598532, z=0.0253396), ResultRow(mangaid='1-147863', plate=8453, plateifu='8453-12703', ifu_name='12703', ra=153.685061429, dec=48.689638952, elpetro_absmag_g_r=1.26955413818359, elpetro_ba=0.527282, z=0.0632026), ResultRow(mangaid='1-148046', plate=8452, plateifu='8452-1902', ifu_name='1902', ra=157.77930272, dec=48.0148303874, elpetro_absmag_g_r=0.790740966796875, elpetro_ba=0.925427, z=0.058703), ResultRow(mangaid='1-148068', plate=8452, plateifu='8452-12703', ifu_name='12703', ra=156.805684986, dec=48.2447914261, elpetro_absmag_g_r=1.28773880004883, elpetro_ba=0.805928, z=0.0609631), ResultRow(mangaid='1-148127', plate=8452, plateifu='8452-3702', ifu_name='3702', ra=156.298016415, dec=47.7390794143, elpetro_absmag_g_r=1.83721733093262, elpetro_ba=0.850507, z=0.0621072), ResultRow(mangaid='1-155337', plate=8249, plateifu='8249-12701', ifu_name='12701', ra=136.156282887, dec=44.874731539, elpetro_absmag_g_r=1.0489559173584, elpetro_ba=0.426153, z=0.0345388), ResultRow(mangaid='1-155440', plate=8249, plateifu='8249-9101', ifu_name='9101', ra=136.476492743, dec=46.259107066, elpetro_absmag_g_r=1.50351715087891, elpetro_ba=0.678623, z=0.0518655), ResultRow(mangaid='1-155456', plate=8249, plateifu='8249-6103', ifu_name='6103', ra=136.793850517, dec=46.2111457117, elpetro_absmag_g_r=1.64328575134277, elpetro_ba=0.962315, z=0.040334), ResultRow(mangaid='1-155463', plate=8249, plateifu='8249-6101', ifu_name='6101', ra=137.562456488, dec=46.2932696556, elpetro_absmag_g_r=1.21530723571777, elpetro_ba=0.558139, z=0.026734), ResultRow(mangaid='1-155541', plate=8249, plateifu='8249-9102', ifu_name='9102', ra=138.37190266, dec=46.6142215927, elpetro_absmag_g_r=1.68683815002441, elpetro_ba=0.544517, z=0.0802487), ResultRow(mangaid='1-155558', plate=8249, plateifu='8249-3702', 
ifu_name='3702', ra=137.03265263, dec=45.9209619515, elpetro_absmag_g_r=1.2888126373291, elpetro_ba=0.589713, z=0.0267975), ResultRow(mangaid='1-155903', plate=8439, plateifu='8439-1901', ifu_name='1901', ra=141.190236455, dec=49.4448016737, elpetro_absmag_g_r=1.11646842956543, elpetro_ba=0.969302, z=0.0163661), ResultRow(mangaid='1-155926', plate=8439, plateifu='8439-12702', ifu_name='12702', ra=141.539307103, dec=49.3102016203, elpetro_absmag_g_r=1.5238151550293, elpetro_ba=0.796842, z=0.0269288), ResultRow(mangaid='1-155975', plate=8439, plateifu='8439-6102', ifu_name='6102', ra=142.778167545, dec=49.0797456578, elpetro_absmag_g_r=1.39241409301758, elpetro_ba=0.725726, z=0.0339319), ResultRow(mangaid='1-155978', plate=8439, plateifu='8439-12701', ifu_name='12701', ra=143.010196099, dec=48.551093077, elpetro_absmag_g_r=0.783824920654297, elpetro_ba=0.526699, z=0.0162666), ResultRow(mangaid='1-156011', plate=8252, plateifu='8252-3702', ifu_name='3702', ra=144.059863049, dec=48.7456976861, elpetro_absmag_g_r=1.76472663879395, elpetro_ba=0.842447, z=0.0905527), ResultRow(mangaid='1-156037', plate=8439, plateifu='8439-9102', ifu_name='9102', ra=143.754018642, dec=48.9767418599, elpetro_absmag_g_r=0.751018524169922, elpetro_ba=0.550243, z=0.0249582), ResultRow(mangaid='1-156061', plate=8439, plateifu='8439-1902', ifu_name='1902', ra=143.697034579, dec=48.7475756651, elpetro_absmag_g_r=1.58340835571289, elpetro_ba=0.859392, z=0.0259393), ResultRow(mangaid='1-156062', plate=8439, plateifu='8439-12705', ifu_name='12705', ra=143.288053477, dec=49.0503236816, elpetro_absmag_g_r=1.47800445556641, elpetro_ba=0.844666, z=0.0511487), ResultRow(mangaid='1-156074', plate=8439, plateifu='8439-6101', ifu_name='6101', ra=143.184618775, dec=48.7963482386, elpetro_absmag_g_r=0.928119659423828, elpetro_ba=0.571587, z=0.0263866), ResultRow(mangaid='1-156137', plate=8439, plateifu='8439-12704', ifu_name='12704', ra=144.031088241, dec=50.4392201284, elpetro_absmag_g_r=1.18128204345703, 
elpetro_ba=0.444969, z=0.0640375), ResultRow(mangaid='1-156154', plate=8439, plateifu='8439-9101', ifu_name='9101', ra=142.713904348, dec=50.3188614584, elpetro_absmag_g_r=0.718753814697266, elpetro_ba=0.439643, z=0.0379614), ResultRow(mangaid='1-166736', plate=8459, plateifu='8459-12702', ifu_name='12702', ra=147.585854164, dec=43.1455699673, elpetro_absmag_g_r=1.00078010559082, elpetro_ba=0.826408, z=0.0170809), ResultRow(mangaid='1-166738', plate=8459, plateifu='8459-12705', ifu_name='12705', ra=148.117076795, dec=42.8191413496, elpetro_absmag_g_r=0.993005752563477, elpetro_ba=0.917477, z=0.016087), ResultRow(mangaid='1-166739', plate=8459, plateifu='8459-12701', ifu_name='12701', ra=147.37898128, dec=42.1302903462, elpetro_absmag_g_r=1.65610313415527, elpetro_ba=0.772124, z=0.0718279), ResultRow(mangaid='1-166754', plate=8459, plateifu='8459-3704', ifu_name='3704', ra=147.32578151, dec=43.3517193284, elpetro_absmag_g_r=0.900096893310547, elpetro_ba=0.417579, z=0.0164167), ResultRow(mangaid='1-166889', plate=8459, plateifu='8459-9101', ifu_name='9101', ra=147.277688884, dec=44.0486811007, elpetro_absmag_g_r=0.859102249145508, elpetro_ba=0.525696, z=0.0156854), ResultRow(mangaid='1-166919', plate=8459, plateifu='8459-3702', ifu_name='3702', ra=146.709100143, dec=43.4238429596, elpetro_absmag_g_r=1.31706047058105, elpetro_ba=0.866956, z=0.0722105), ResultRow(mangaid='1-166930', plate=8459, plateifu='8459-6103', ifu_name='6103', ra=146.789027825, dec=43.4185743942, elpetro_absmag_g_r=1.5174388885498, elpetro_ba=0.550614, z=0.0720255), ResultRow(mangaid='1-166932', plate=8459, plateifu='8459-3701', ifu_name='3701', ra=146.785813609, dec=43.5104758987, elpetro_absmag_g_r=1.91670417785645, elpetro_ba=0.951709, z=0.0724488), ResultRow(mangaid='1-166947', plate=8459, plateifu='8459-3703', ifu_name='3703', ra=147.335, dec=43.44299, elpetro_absmag_g_r=1.58527755737305, elpetro_ba=0.921915, z=0.0719792), ResultRow(mangaid='1-166969', plate=8459, plateifu='8459-6102', 
ifu_name='6102', ra=147.990674372, dec=43.4140430617, elpetro_absmag_g_r=0.653806686401367, elpetro_ba=0.921856, z=0.0158773), ResultRow(mangaid='1-167013', plate=8459, plateifu='8459-9102', ifu_name='9102', ra=149.888880629, dec=43.6605000576, elpetro_absmag_g_r=0.906030654907227, elpetro_ba=0.779596, z=0.0170491), ResultRow(mangaid='1-167044', plate=8459, plateifu='8459-6104', ifu_name='6104', ra=149.346878642, dec=44.1547632349, elpetro_absmag_g_r=1.66348838806152, elpetro_ba=0.938967, z=0.0741969), ResultRow(mangaid='1-167067', plate=8459, plateifu='8459-1902', ifu_name='1902', ra=148.502535855, dec=43.0448001127, elpetro_absmag_g_r=1.06909561157227, elpetro_ba=0.766896, z=0.0169612), ResultRow(mangaid='1-167075', plate=8459, plateifu='8459-12704', ifu_name='12704', ra=147.604836276, dec=44.0406378719, elpetro_absmag_g_r=0.854578018188477, elpetro_ba=0.62063, z=0.0158584), ResultRow(mangaid='1-167079', plate=8459, plateifu='8459-1901', ifu_name='1901', ra=147.801793989, dec=44.0093089046, elpetro_absmag_g_r=1.34856986999512, elpetro_ba=0.777813, z=0.015711), ResultRow(mangaid='1-167080', plate=8459, plateifu='8459-6101', ifu_name='6101', ra=147.712302507, dec=44.0304545816, elpetro_absmag_g_r=1.1823673248291, elpetro_ba=0.809313, z=0.0463805), ResultRow(mangaid='1-167113', plate=8459, plateifu='8459-12703', ifu_name='12703', ra=148.84161359, dec=44.4405591163, elpetro_absmag_g_r=1.2220287322998, elpetro_ba=0.415025, z=0.0264594), ResultRow(mangaid='1-167380', plate=8453, plateifu='8453-3701', ifu_name='3701', ra=153.231920862, dec=46.4177099017, elpetro_absmag_g_r=1.58125877380371, elpetro_ba=0.478194, z=0.0382131), ResultRow(mangaid='1-167555', plate=8453, plateifu='8453-3703', ifu_name='3703', ra=153.752608461, dec=46.7567528969, elpetro_absmag_g_r=1.26830101013184, elpetro_ba=0.759023, z=0.0246439), ResultRow(mangaid='1-167564', plate=8453, plateifu='8453-12705', ifu_name='12705', ra=153.034483163, dec=46.2936923797, elpetro_absmag_g_r=1.14142608642578, 
elpetro_ba=0.399407, z=0.024247)] ###Markdown The Query Datamodel shows you every parameter that is available to search on. It groups parameters together into common types. ###Code qdm = q.datamodel qdm qdm.groups # look at all the available NSA parameters qdm.groups['nsa'].parameters ###Output _____no_output_____
P3_Landmark_Detection_And_Tracking(SLAM)/3. Landmark Detection and Tracking.ipynb
###Markdown Project 3: Implement SLAM --- Project OverviewIn this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world!SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem. Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`. > `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the worldYou can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced; for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:```mu = matrix([[Px0], [Py0], [Px1], [Py1], [Lx0], [Ly0], [Lx1], [Ly1]])```You can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider an `nx1` matrix to be a vector. Generating an environmentIn a real SLAM problem, you may be given a map that contains information about landmark locations, and in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of the robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. 
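The interlaced layout described above can be sketched in a few lines of numpy. The pose and landmark coordinates below are placeholder values chosen for illustration only:

```python
import numpy as np

# hypothetical values: 2 robot poses and 2 landmark positions
poses = [(50.0, 50.0), (62.0, 44.0)]      # (Px, Py) per time step
landmarks = [(26.0, 58.0), (89.0, 43.0)]  # (Lx, Ly) per landmark

# interlace x and y values, poses first, then landmarks,
# and reshape into an nx1 column vector like `mu`
mu = np.array([v for point in poses + landmarks for v in point]).reshape(-1, 1)
print(mu.shape)  # (8, 1)
```

With 2 poses and 2 landmarks this gives `2 * (2 + 2) = 8` rows, matching the `[Px0, Py0, Px1, Py1, Lx0, Ly0, Lx1, Ly1]` layout shown above.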
The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.--- Create the worldUse the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds! `data` holds the sensor measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`. Helper functionsYou will be working with the `robot` class that may look familiar from the first notebook. In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook. ###Code import numpy as np from helpers import make_data # your implementation of slam should work with the following inputs # feel free to change these input values and see how it responds! # world parameters num_landmarks = 5 # number of landmarks N = 20 # time steps world_size = 100.0 # size of world (square) # robot parameters measurement_range = 50.0 # range at which we can sense landmarks motion_noise = 2.0 # noise in robot motion measurement_noise = 2.0 # noise in the measurements distance = 20.0 # distance by which robot (intends to) move each iteration # make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance) ###Output Landmarks: [[26, 58], [39, 70], [89, 43], [24, 69], [51, 31]] Robot: [x=32.79727 y=42.38551] ###Markdown A note on `make_data`The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:1. 
Instantiating a robot (using the robot class)2. Creating a grid world with landmarks in it**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track a robot over time and determine the locations of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:```measurement = data[i][0]motion = data[i][1]``` ###Code # print out some stats about the data time_step = 0 print('Example measurements: \n', data[time_step][0]) print('\n') print('Example motion: \n', data[time_step][1]) ###Output Example measurements: [[0, -22.10443774904035, 9.895562250959651], [1, -12.712344731476009, 18.28765526852399], [2, 38.76228966685868, -7.237710333141322], [3, -24.182278763542453, 20.817721236457547], [4, 2.32825147179834, -17.67174852820166]] Example motion: [12.658399042468488, -15.484344793423867] ###Markdown Try changing the value of `time_step`; you should see that the list of measurements varies based on what in the world the robot sees after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy in the measure of distance between its location and the location of landmarks. The motion of the robot is always a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam.
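To make that access pattern concrete, here is a small sketch over made-up data in the same shape: each entry of `data` is `[measurements, motion]`, where each measurement is `[landmark_index, dx, dy]` (all values below are invented for illustration):

```python
# Toy data in the same shape as make_data's output (values invented)
data = [
    # time step 0: two landmark sightings, then a motion of (5, 0)
    [[[0, -3.0, 4.0], [2, 10.0, -1.5]], [5.0, 0.0]],
    # time step 1: one landmark sighting, then another motion of (5, 0)
    [[[2, 5.2, -1.4]], [5.0, 0.0]],
]

for i in range(len(data)):
    measurements = data[i][0]  # list of [landmark_index, dx, dy]
    motion = data[i][1]        # (dx, dy) motion taken after sensing
    print(f"step {i}: {len(measurements)} measurement(s), motion {motion}")
```

Note that the number of measurements can differ from step to step, since a landmark may be out of `measurement_range`.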
Initialize ConstraintsOne of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.You may also choose to create two of each omega and xi (one for x and one for y positions). TODO: Write a function that initializes omega and xiComplete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point).
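As a warm-up, the 1D case from Notebook 2 can be reproduced numerically with made-up numbers: start at x0 = 5 with full confidence, add one motion constraint x1 - x0 = 3, and solve omega * mu = xi:

```python
import numpy as np

# 1D toy problem: two poses, no landmarks (numbers invented for illustration)
omega = np.zeros((2, 2))
xi = np.zeros((2, 1))

# initial-position constraint: x0 = 5, with strength 1
omega[0, 0] += 1.0
xi[0, 0] += 5.0

# motion constraint: x1 - x0 = 3 (adds to every affected cell)
omega[0, 0] += 1.0
omega[0, 1] += -1.0
omega[1, 0] += -1.0
omega[1, 1] += 1.0
xi[0, 0] += -3.0
xi[1, 0] += 3.0

mu = np.linalg.solve(omega, xi)  # same result as inv(omega) @ xi
print(mu.ravel())  # [5. 8.]
```

The 2D version you are asked to write follows the same pattern; it just keeps separate rows for the x and y components of each pose and landmark.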
The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values.*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!* ###Code def initialize_constraints(N, num_landmarks, world_size): ''' This function takes in a number of time steps N, number of landmarks, and a world_size, and returns initialized constraint matrices, omega and xi.''' ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable rows, cols = (2*N + 2*num_landmarks), (2*N + 2*num_landmarks) ## TODO: Define the constraint matrix, Omega, with two initial "strength" values omega = np.zeros((rows, cols)) ## for the initial x, y location of our robot omega[0,0] = 1.0 omega[1,1] = 1.0 ## TODO: Define the constraint *vector*, xi ## you can assume that the robot starts out in the middle of the world with 100% confidence xi = np.zeros((rows, 1)) xi[0, 0] = world_size/2.0 xi[1, 0] = world_size/2.0 return omega, xi ###Output _____no_output_____ ###Markdown Test as you goIt's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final `slam` function.This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly.
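Alongside the heatmap visualization, a plain assertion-based check can catch shape and initial-value mistakes quickly. The sketch below repeats a minimal copy of `initialize_constraints` (assuming the single-omega/single-xi layout above) so it stands alone:

```python
import numpy as np

def initialize_constraints(N, num_landmarks, world_size):
    # minimal copy of the implementation above (single omega/xi, interlaced x, y)
    size = 2 * N + 2 * num_landmarks
    omega = np.zeros((size, size))
    omega[0, 0] = 1.0
    omega[1, 1] = 1.0
    xi = np.zeros((size, 1))
    xi[0, 0] = world_size / 2.0
    xi[1, 0] = world_size / 2.0
    return omega, xi

omega, xi = initialize_constraints(N=5, num_landmarks=2, world_size=10)

assert omega.shape == (14, 14) and xi.shape == (14, 1)
assert omega[0, 0] == 1.0 and omega[1, 1] == 1.0  # initial-pose strengths
assert np.count_nonzero(omega) == 2               # nothing else is known yet
assert xi[0, 0] == 5.0 and xi[1, 0] == 5.0        # world center
print("initialize_constraints looks sane")
```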
The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`. ###Code # import data viz resources import matplotlib.pyplot as plt from pandas import DataFrame import seaborn as sns %matplotlib inline # define a small N and world_size (small for ease of visualization) N_test = 5 num_landmarks_test = 2 small_world = 10 # initialize the constraints initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world) # define figure size plt.rcParams["figure.figsize"] = (10,7) # display omega sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5) # define figure size plt.rcParams["figure.figsize"] = (1,7) # display xi sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5) ###Output _____no_output_____ ###Markdown --- SLAM inputs In addition to `data`, your slam function will also take in:* N - The number of time steps that a robot will be moving and sensing* num_landmarks - The number of landmarks in the world* world_size - The size (w/h) of your world* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise` A note on noiseRecall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where 
`noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`. TODO: Implement Graph SLAMFollow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation! Updating with motion and measurementsWith a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!** ###Code ## TODO: Complete the code to implement SLAM ## slam takes in 6 arguments and returns mu, ## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise): ## TODO: Use your initilization to create constraint matrices, omega and xi omega, xi = initialize_constraints(N, num_landmarks, world_size) ## TODO: Iterate through each time step in the data #print(len(data)) for step_i, data_point in enumerate(data): idx_x = 2*step_i idx_y = 2*step_i+1 ## get all the motion and measurement data as you iterate ## TODO: update the constraint matrix/vector to account for all *measurements* ## this should be a series of additions that take into account the measurement noise for lm_i, lm_point in enumerate(data_point[0]): measurement_idx, measurement_x, 
measurement_y = lm_point[0],lm_point[1],lm_point[2] lm_idx_x = 2 * N + 2* measurement_idx lm_idx_y = 2 * N + 2* measurement_idx+1 omega[idx_x, idx_x] += 1.0/measurement_noise omega[idx_x, lm_idx_x] += -1.0/measurement_noise omega[lm_idx_x, idx_x] += -1.0/measurement_noise omega[lm_idx_x, lm_idx_x] += 1.0/measurement_noise omega[idx_y, idx_y] += 1.0/measurement_noise omega[idx_y, lm_idx_y] += -1.0/measurement_noise omega[lm_idx_y, idx_y] += -1.0/measurement_noise omega[lm_idx_y, lm_idx_y] += 1.0/measurement_noise xi[idx_x, 0] += -measurement_x/measurement_noise xi[lm_idx_x, 0] += measurement_x/measurement_noise xi[idx_y, 0] += -measurement_y/measurement_noise xi[lm_idx_y, 0] += measurement_y/measurement_noise ## TODO: update the constraint matrix/vector to account for all *motion* and motion noise motion_x, motion_y = data_point[1][0],data_point[1][1] omega[idx_x, idx_x] += 1.0/motion_noise omega[idx_x, idx_x + 2] += -1.0/motion_noise omega[idx_x+2, idx_x] += -1.0/motion_noise omega[idx_x+2, idx_x+2] += 1.0/motion_noise omega[idx_y, idx_y] += 1.0/motion_noise omega[idx_y, idx_y + 2] += -1.0/motion_noise omega[idx_y+2, idx_y] += -1.0/motion_noise omega[idx_y+2,idx_y+2] += 1.0/motion_noise xi[idx_x, 0] += -motion_x/motion_noise xi[idx_x+2, 0] += motion_x/motion_noise xi[idx_y, 0] += -motion_y/motion_noise xi[idx_y+2, 0] += motion_y/motion_noise ## TODO: After iterating through all the data ## Compute the best estimate of poses and landmark positions ## using the formula, omega_inverse * Xi mu = np.dot(np.linalg.inv(omega),xi) return mu # return `mu` ###Output _____no_output_____ ###Markdown Helper functionsTo check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. 
First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmark locations and returns those as their own, separate lists. Then, we define a function that nicely prints out these lists; both of these we will call in the next step. ###Code # a helper function that creates a list of poses and of landmarks for ease of printing # this only works for the suggested constraint architecture of interlaced x,y poses def get_poses_landmarks(mu, N): # create a list of poses poses = [] for i in range(N): poses.append((mu[2*i].item(), mu[2*i+1].item())) # create a list of landmarks landmarks = [] for i in range(num_landmarks): landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item())) # return completed lists return poses, landmarks def print_all(poses, landmarks): print('\n') print('Estimated Poses:') for i in range(len(poses)): print('['+', '.join('%.3f'%p for p in poses[i])+']') print('\n') print('Estimated Landmarks:') for i in range(len(landmarks)): print('['+', '.join('%.3f'%l for l in landmarks[i])+']') ###Output _____no_output_____ ###Markdown Run SLAMOnce you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks! What to ExpectThe `data` that is generated is random, but you did specify the number, `N`, of time steps that the robot was expected to move and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate a position for). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.With these values in mind, you should expect to see a result that displays two lists:1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.2.
**Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length. Landmark LocationsIf you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement). ###Code # call your implementation of slam, passing in the necessary parameters mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise) # print out the resulting landmarks and poses if(mu is not None): # get the lists of poses and landmarks # and print them out poses, landmarks = get_poses_landmarks(mu, N) print_all(poses, landmarks) ###Output Estimated Poses: [50.000, 50.000] [61.502, 33.379] [73.818, 18.967] [85.671, 4.608] [89.263, 23.931] [93.312, 42.993] [98.500, 62.380] [85.439, 77.364] [73.872, 93.353] [74.821, 73.417] [76.253, 54.186] [77.026, 34.557] [76.418, 14.374] [94.371, 4.836] [75.731, 6.770] [56.550, 7.693] [36.737, 8.296] [16.595, 8.425] [23.484, 25.447] [30.952, 44.000] Estimated Landmarks: [27.774, 59.932] [39.352, 70.395] [89.267, 43.115] [25.112, 70.216] [51.743, 31.570] ###Markdown Visualize the constructed worldFinally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the position of landmarks, created from only motion and measurement data!**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.** ###Code # import the helper function from helpers import display_world # Display the final world!
# define figure size plt.rcParams["figure.figsize"] = (20,20) # check if poses has been created if 'poses' in locals(): # print out the last pose print('Last pose: ', poses[-1]) # display the last position of the robot *and* the landmark positions display_world(int(world_size), poses[-1], landmarks) ###Output Last pose: (30.951669309349796, 44.00021820190255) ###Markdown Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters. **Answer**: True robot pose: x=32.80, y=42.39. Estimated by SLAM: x=30.95, y=44.00. Difference between true and estimated: x_diff = 32.80 - 30.95 = 1.85, y_diff = 42.39 - 44.00 = -1.61. The error is due to measurement and motion uncertainty. I ran multiple combinations; based on the results, increasing N reduces the error (roughly, error ∝ 1/N). In a higher-noise scenario, increasing N gives a larger benefit for reducing error; in a lower-noise scenario, increasing N does not help much once the estimate has converged. Starting from the initial probability distribution, the robot goes through cycles of sensing, then moving, then sensing, and so on. Each time the robot senses, it gains information about its environment, and every time it moves, it loses some information due to motion uncertainty. In general, entropy measures the amount of uncertainty: since the motion update increases uncertainty, entropy should increase, and since the measurement step decreases uncertainty, entropy should decrease. TestingTo confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases.
A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close-to or exactly** identical to the given results. If there are minor discrepancies it could be a matter of floating point accuracy or in the calculation of the inverse matrix. Submit your projectIf you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit! ###Code # Here is the data and estimated outputs for test case 1 test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, 
-15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, -18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]] ## Test Case 1 ## # Estimated Pose(s): # [50.000, 50.000] # [37.858, 33.921] # [25.905, 18.268] # [13.524, 2.224] # [27.912, 16.886] # [42.250, 30.994] # [55.992, 44.886] # [70.749, 59.867] # [85.371, 75.230] # [73.831, 92.354] # [53.406, 96.465] # [34.370, 100.134] # [48.346, 83.952] # [60.494, 68.338] # [73.648, 53.082] # [86.733, 38.197] # [79.983, 20.324] # [72.515, 2.837] # [54.993, 13.221] # [37.164, 22.283] # Estimated Landmarks: # [82.679, 13.435] # [70.417, 74.203] # [36.688, 61.431] # [18.705, 66.136] # [20.437, 16.983] ### Uncomment the following three lines for test case 1 and compare the output to the values above ### mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0) poses, landmarks = get_poses_landmarks(mu_1, 20) print_all(poses, landmarks) # Here is the data and estimated outputs for test case 2 test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 
17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 
16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]] ## Test Case 2 ## # Estimated Pose(s): # [50.000, 50.000] # [69.035, 45.061] # [87.655, 38.971] # [76.084, 55.541] # [64.283, 71.684] # [52.396, 87.887] # [44.674, 68.948] # [37.532, 49.680] # [31.392, 30.893] # [24.796, 12.012] # [33.641, 26.440] # [43.858, 43.560] # [54.735, 60.659] # [65.884, 77.791] # [77.413, 94.554] # [96.740, 98.020] # [76.149, 99.586] # [70.211, 80.580] # [64.130, 61.270] # [58.183, 42.175] # Estimated Landmarks: # [76.777, 42.415] # [85.109, 76.850] # [13.687, 95.386] # [59.488, 39.149] # [69.283, 93.654] ### Uncomment the following three lines for test case 2 and compare to the values above ### mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0) poses, landmarks = get_poses_landmarks(mu_2, 20) print_all(poses, landmarks) ###Output Estimated Poses: [50.000, 50.000] [69.181, 45.665] [87.743, 39.703] [76.270, 56.311] [64.317, 72.176] [52.257, 88.154] [44.059, 69.401] [37.002, 49.918] [30.924, 30.955] [23.508, 11.419] [34.180, 27.133] [44.155, 43.846] [54.806, 60.920] [65.698, 78.546] [77.468, 95.626] [96.802, 98.821] [75.957, 99.971] [70.200, 81.181] [64.054, 61.723] [58.107, 42.628] Estimated Landmarks: [76.779, 42.887] 
[85.065, 77.438] [13.548, 95.652] [59.449, 39.595] [69.263, 94.240]
DeepRL/DeepRL.ipynb
###Markdown Deep Reinforcement LearningIn this session, we will use Open AI Gym to implement Q-learning, a classic algorthim in RL, and Deep Q-Networks (DQN), its deep learning counterpart. Hopefully this will give us some intuition for what we gain by moving from tabular methods to using neural networks for function approximation. Setup First, let's install the necessary dependencies. This does not need to be run again if you restart the runtime, but it does if you factory reset the runtime. ###Code !sudo apt-get install -y xvfb ffmpeg !apt-get install x11-utils !pip install 'gym==0.17.1' !pip install 'pyglet==1.4.0' !pip install pyvirtualdisplay !pip install --upgrade tensorflow-probability !pip install imageio-ffmpeg ###Output Reading package lists... Done Building dependency tree Reading state information... Done ffmpeg is already the newest version (7:3.4.6-0ubuntu0.18.04.1). The following NEW packages will be installed: xvfb 0 upgraded, 1 newly installed, 0 to remove and 25 not upgraded. Need to get 784 kB of archives. After this operation, 2,266 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 xvfb amd64 2:1.19.6-1ubuntu4.4 [784 kB] Fetched 784 kB in 0s (6,337 kB/s) debconf: unable to initialize frontend: Dialog debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.) debconf: falling back to frontend: Readline debconf: unable to initialize frontend: Readline debconf: (This frontend requires a controlling tty.) debconf: falling back to frontend: Teletype dpkg-preconfigure: unable to re-open stdin: Selecting previously unselected package xvfb. (Reading database ... 144568 files and directories currently installed.) Preparing to unpack .../xvfb_2%3a1.19.6-1ubuntu4.4_amd64.deb ... Unpacking xvfb (2:1.19.6-1ubuntu4.4) ... Setting up xvfb (2:1.19.6-1ubuntu4.4) ... 
Processing triggers for man-db (2.8.3-2ubuntu0.1) ... Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: libxxf86dga1 Suggested packages: mesa-utils The following NEW packages will be installed: libxxf86dga1 x11-utils 0 upgraded, 2 newly installed, 0 to remove and 25 not upgraded. Need to get 209 kB of archives. After this operation, 711 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 libxxf86dga1 amd64 2:1.1.4-1 [13.7 kB] Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 x11-utils amd64 7.7+3build1 [196 kB] Fetched 209 kB in 0s (2,013 kB/s) Selecting previously unselected package libxxf86dga1:amd64. (Reading database ... 144575 files and directories currently installed.) Preparing to unpack .../libxxf86dga1_2%3a1.1.4-1_amd64.deb ... Unpacking libxxf86dga1:amd64 (2:1.1.4-1) ... Selecting previously unselected package x11-utils. Preparing to unpack .../x11-utils_7.7+3build1_amd64.deb ... Unpacking x11-utils (7.7+3build1) ... Setting up libxxf86dga1:amd64 (2:1.1.4-1) ... Setting up x11-utils (7.7+3build1) ... Processing triggers for man-db (2.8.3-2ubuntu0.1) ... Processing triggers for libc-bin (2.27-3ubuntu1) ... 
/sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link Requirement already satisfied: gym==0.17.1 in /usr/local/lib/python3.6/dist-packages (0.17.1) Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym==0.17.1) (1.3.0) Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym==0.17.1) (1.5.0) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym==0.17.1) (1.12.0) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym==0.17.1) (1.4.1) Requirement already satisfied: numpy>=1.10.4 in /usr/local/lib/python3.6/dist-packages (from gym==0.17.1) (1.18.3) Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym==0.17.1) (0.16.0) Collecting pyglet==1.4.0 [?25l Downloading https://files.pythonhosted.org/packages/8a/2e/74069cfb668afcb29f0c7777c863d0b1d831accf61558f46cebf34bcfe07/pyglet-1.4.0-py2.py3-none-any.whl (1.0MB)  |████████████████████████████████| 1.0MB 6.3MB/s [?25hRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet==1.4.0) (0.16.0) Installing collected packages: pyglet Found existing installation: pyglet 1.5.0 Uninstalling pyglet-1.5.0: Successfully uninstalled pyglet-1.5.0 Successfully installed pyglet-1.4.0 Collecting pyvirtualdisplay Downloading https://files.pythonhosted.org/packages/69/ec/8221a07850d69fa3c57c02e526edd23d18c7c05d58ed103e3b19172757c1/PyVirtualDisplay-0.2.5-py2.py3-none-any.whl Collecting EasyProcess Downloading https://files.pythonhosted.org/packages/32/8f/88d636f1da22a3c573259e44cfefb46a117d3f9432e2c98b1ab4a21372ad/EasyProcess-0.2.10-py2.py3-none-any.whl Installing collected packages: EasyProcess, pyvirtualdisplay Successfully installed EasyProcess-0.2.10 pyvirtualdisplay-0.2.5 Requirement already up-to-date: 
tensorflow-probability in /usr/local/lib/python3.6/dist-packages (0.10.0rc0) Requirement already satisfied, skipping upgrade: gast>=0.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-probability) (0.3.3) Requirement already satisfied, skipping upgrade: cloudpickle>=1.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-probability) (1.3.0) Requirement already satisfied, skipping upgrade: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-probability) (1.12.0) Requirement already satisfied, skipping upgrade: decorator in /usr/local/lib/python3.6/dist-packages (from tensorflow-probability) (4.4.2) Requirement already satisfied, skipping upgrade: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow-probability) (1.18.3) Collecting imageio-ffmpeg [?25l Downloading https://files.pythonhosted.org/packages/0a/45/2472071095310b3e92010c051cbd2e3c655247ad9090851a86b8bfdcfbc5/imageio_ffmpeg-0.4.1-py3-none-manylinux2010_x86_64.whl (22.2MB)  |████████████████████████████████| 22.2MB 72.5MB/s [?25hInstalling collected packages: imageio-ffmpeg Successfully installed imageio-ffmpeg-0.4.1 ###Markdown Now let's import the packages we will be using. ###Code from __future__ import absolute_import, division, print_function import base64 import imageio import IPython import matplotlib import matplotlib.pyplot as plt import numpy as np import PIL.Image import pyvirtualdisplay import random import tensorflow as tf import gym from gym.spaces import Discrete from gym.spaces import Box from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import Adam from collections import defaultdict from collections import deque from pyvirtualdisplay import Display print(tf.version.VERSION) print(gym.version.VERSION) ###Output 2.2.0-rc3 0.17.1 ###Markdown Introduction to Open AI GymGym provides you with a set of [environments](https://gym.openai.com/envs/classic_control). 
If we think of the classic RL framework schematic, Gym takes care of the environment, and you take care of the agent. ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA4wAAAFeCAYAAADOo5GvAAAgAElEQVR4AeydCxRdRXnvIfLScq0QSrlKbQqEUlAvrRCoqxVQean3WqsCFm/B2hYCxS67bqXALSAKBkQN1JrgA6lFoBaVRxKp5RUQUakkXpcCJhBCAJWgEB7yCGHu+m/8Tr4zZ+9z9nnu2fv8Zq1Zs8/es2e++c2cfeZ/ZvbMJgEHAQhAAAIQgAAEIAABCEAAAhDIIbBJzjlOQQACEIAABCAAAQhAAAIQgAAEAoKRRgABCEAAAhCAAAQgAAEIQAACuQQQjLlYOAkBCEAAAhCAAAQgAAEIQAACCEbaAAQgAAEIQAACEIAABCAAAQjkEkAw5mLhJAQgAAEIQAACEIAABCAAAQggGGkDEIAABCAAAQhAAAIQgAAEIJBLAMGYi4WTEIAABCAAAQhAAAIQgAAEIIBgpA1AAAIQgAAEIAABCEAAAhCAQC4BBGMuFk5CAAIQgAAEIAABCEAAAhCAAIKRNgABCEAAAhCAAAQgAAEIQAACuQQQjLlYOAkBCEAAAhCAAAQgAAEIQAACCEbaAAQgAAEIQAACEIAABCAAAQjkEkAw5mLhJAQgAAEIQAACEIAABCAAAQggGGkDEIAABCAAAQhAAAIQgAAEIJBLAMGYi4WTEIAABCAAAQhAAAIQgAAEIIBgpA1AAAIQgAAEIAABCEAAAhCAQC4BBGMuFk5CAAIQgAAEIAABCEAAAhCAAIKRNgABCEAAAhCAAAQgAAEIQAACuQQQjLlYOAkBCEAAAhCAAAQgAAEIQAACCEbaAAQgAAEIQAACEIAABCAAAQjkEkAw5mLhJAQgAAEIQAACEIAABCAAAQggGGkDEIAABCAAAQhAAAIQgAAEIJBLAMGYi4WTEIAABCAAAQhAAAIQgAAEIIBgpA1AAAIQgAAEIAABCEAAAhCAQC4BBGMuFk5CAAIQgAAEIAABCEAAAhCAAIKRNgABCEAAAhCAAAQgAAEIQAACuQQQjLlYOAkBCEAAAsMQ2LBhQ1i/fn3mn3322SD/9NNPZ/6pp54K+OYzeOaZZ4K81b/axPPPP5/5YdoW90IAAhCAwGQJIBgny5vcIAABCDSKgAkAE4cmBNetWxcefvjhzP/0pz8N8vfff3/m77vvvmB+9erVAd8MBlana9asCfIPPvhg5teuXRvkH3300fDkk09m3otICUkcBCAAAQikSwDBmG7dYBkEIACB5An0IxhvvfXWcNVVV4XPfe5z4ROf+ETmP/7xjwd8MxhYnX7yk58M8pdffnn41re+Fe69914EY/LfZAyEAAQgUEwAwVjMhisQgAAEIFBA4LnnngvyNqL4i1/8Isg/8MADmf/xj38clixZEk466aSwzz77hK222irMmDEjbLrppvgpYqA6l//t3/7t8L73vS9ceuml4Z577skEpEYdn3jiiczbiKP+gMBBAAIQgEBaBBCMadUH1kAAAhCoBYFugvGWW24JBx54YEsYbrLJJgEPA/uz4BWveEX41Kc+lYlGBGMtvu4YCQEITDkBBOOUNwCKDwEIQKAsAZt+qtGgxx9/PPP2fuKdd94Zbr/99vCOd7wjbL755plYRCQiEovagMTjq171qrB48eLsfUe98yqvRXL0TiPvNZb9VhIPAhCAwPgJIBjHz5gcIAABCDSCQDfB+I1vfCPsvPPOuUJx6623Dm9729vCWWedFb74xS+G66+/Ptx99934BjP4zne+E7761a+G888/Pxx//PFh1qxZuaPM+nPhYx/7WCYWEYyNeExQCAhAoIEEEIwNrFSKBAEIQGAcBOw9s8ceeywbFdJKmMuWLQtf+MI
Xwq//+q93CII999wzEw1aGdPEpoXjsI800yJgda1QI4bf+973wlFHHRU222yztrai0cb3vOc9YcWKFdl7sLYdh92fVqmwBgIQgMD0EUAwTl+dU2IIQAACAxHIE4yXXXZZ+LVf+7U2AbDDDjtkI4kSCer04yBgBNQevv/974c3vvGNbW3GRKMWTkIwGi1CCEAAAmkQQDCmUQ9YAQEIQCBZArbAjS1Qov32brvttnDNNdeE7bbbrq3jv9tuu4W77roLoZhsbaZhmP58OOaYY9rajkTjhz/84fDII49k3v6gSMNirIAABCAwvQQQjNNb95QcAhCAQFcCNiXQts6wBW60uM3Xvva1sPvuu7d1+Pfaa6/w85//vGuaXISAEVD7OuOMM9rakKarfuUrXwmrV69uLazEIjhGjBACEIBANQQQjNVwJ1cIQAACyRPoJhjf//73t3X0NQ31wQcfTL5MGJgWAbUxvdfoV1P9H//jfyAY06omrIEABKacAIJxyhsAxYcABCBQRGD9+vVBXu+VyWvrDPkrr7wyzJw5s9XJ16jQd7/73aJkOA+BrgQ09VSj0140nnnmmeFnP/tZ5p9++ukgj4MABCAAgWoIIBir4U6uEIAABJInUCQYjz322LbOvUaIcBAYhsC3vvWttjb1yle+EsE4DFDuhQAEIDBCAgjGEcIkKQhAAAJNImDvLt5///1B/tZbb838Tjvt1Orcb7XVVkGL4OAgMCyBww8/vNWuNNp4+eWXZ3t1Pv7449n7jLzLOCxh7ocABCAwGAEE42DcuAsCEIBA4wnkCUZNR/VTB9XJx0FgFARuuummtrb1N3/zNwjGUYAlDQhAAAJDEkAwDgmQ2yEAAQg0kYBGc9atW5d5bZMhv2TJko6tEL74xS82sfiUqSICfpuWPfbYIyxfvjw8/PDDmbcp0hWZRrYQgAAEppYAgnFqq56CQwACECgmUCQYDzzwwLZRoIceeqg4Ea5AoE8C8YqpCMY+ARIdAhCAwBgIIBjHAJUkIQABCNSdgEZz1q5dm3l12uUvu+yy8OpXv7olGLWVBg4CoyTw4Q9/uNW+NPX5iiuuCA888EDmWS11lKRJCwIQgEB5AgjG8qyICQEIQGBqCDzzzDPhJz/5Sea//e1vB/kLL7wwvOIVr2h16LUVAg4CoyTwhS98odW+JBgXLFgQVq1alflf/vKXQR4HAQhAAAKTJYBgnCxvcoMABCBQCwJFgvElL3lJq0P/J3/yJ7UoC0bWh8ANN9zQal8SjJ/4xCcQjPWpPiyFAAQaSgDB2NCKpVgQgAAEhiGg6X9r1qzJ/M033xzkNdrjV0g9+uijh8mCeyHQQSAWjOecc05YuXJl5p944okgj4MABCAAgckSQDBOlje5QQACEKgFAQRjLaqpcUYiGBtXpRQIAhBoAAEEYwMqkSJAAAIQGDUB7cG4evXqzN94441B/p/+6Z8YYRw1aNJrIxALxnnz5mVbumhbl8ceeyzzbTfwAQIQgAAExk4AwTh2xGQAAQhAoH4EEIz1q7MmWIxgbEItUgYIQKBpBBCMTatRygMBCEBgBAQkGO+9997MqxMvf9555zHCOAK2JFFMIBaMZ511Vrjzzjszv27duiCPgwAEIACByRJAME6WN7lBAAIQGAsB7ZkokTcqh2AcFUnS6YcAgrEfWsSFAAQgMBkCCMbJcCYXCEAAAmMlcNFFF4Xtt98+zJ07N1vRdNjM8gTj/PnzGWEcFiz3dyWAYOyKh4sQgAAEKiGAYKwEO5lCAAIQGC0B2/B80003DTNmzAi77rpr+MhHPpJtizFITtog3TZMv/7664M8gnEQktzTD4FYMJ555pnhjjvuyLxNSX3++ef7SZK4EIAABCAwJAEE45AAuR0CEIBACgRMMPp9Ek08HnjggeHiiy/ua8oqgjGFWp0+GxCM01fnlBgCEEifAIIx/TrCQghAAAI9CeQJRhOPEo7yL33pS7Mpq7feemvP9BCMPRERYQwEEIx
jgEqSEIAABIYkgGAcEiC3QwACEEiBQDfBaMJRoY067r777kF73K1ZsybXfARjLhZOjpkAgnHMgEkeAhCAwAAEEIwDQOMWCEAAAqkRKCsYY/G4xRZbhEMPPTRcdtllbVNWEYyp1fB02INgnI56ppQQgEC9CCAY61VfmbW2EAXhqtaiHLCAxbS3gY997GNtK5h6Ydjr2KasapXVE044Idx2220BwVjDH4cGmIxgbEAlUgQIQKBxBBCMNarS008/PXsHSSsg4mFAG6AN+DYg0ddLGJa5blNWX/WqV4VTTjkl6H3H1FZJ1WjoOeec0+Zr9CjH1C4EEIxd4HAJAhCAQEUEEIwVge83W3WQRtUhLNNpJM4mI+l8wxGOdW4DeuZsvvnmYZ999gn/+I//GOJRzKOPPrrfR9nQ8R9++OGw7bbbZs9D2Wd+yZIlQ6dNAtUTQDBWXwdYAAEIQCAmgGCMiST6+dhjj0XAbIL4qLP4wPZ6t18Js6222qrtOVSFYNSfZ3lt6Ygjjkj06V1/s/xorviP0yEYx0mXtCEAAQgMRgDBOBi3id+ljlleJ4lz9e6EU3/UXx3awItf/OJw8MEHh+OPP77tOVSFYDzkkEPabDB+ErQafcSNnoCN4ioU/3E6BOM46ZI2BCAAgcEIIBgH4zbxuxCMCAvrGBPSFibVBvbaa6/sPcFFixaF6667LsyfP79NrE1aMN59991dp+aPe/Rr4g/+RDL07U1/HIzTIRjHSZe0IQABCAxGAME4GLeJ35UnGPfcc8/WYhS2KAXh9TC5HgbT+D344Ac/2CbmfCe/n+Mdd9wxW+zm+9//frjnnnsybzyrFoxahMeXRYLWf957770n/myehgw9YwTjNNQ4ZYQABCDQTgDB2M4j2U95gnH//fcPzz//PB4GtAHaQBhkH0YTAno38cgjjwzf+MY3woYNG7L2lOK2GrvsskubQFy2bFnbZ5VHo5C40RKwdqIQwThatqQGAQhAoA4EEIx1qKUQQpFgrIn5mAkBCIyZwCCCcb/99gsXXnhhWLduXetPBzMzNcGoVVC9cNFCYHIHHXRQ23kt0IIbLQHPHcE4WrakBgEIQKAOBBCMdaglBGNNagkzIVAdgbKCcdasWeHUU0/NRuJshkKe1akJxuOOO65NGOq9SrlLLrmk7bxGIUfhbr755rZ9HiVY40V14jj95qv7L7jgglY+egdz+fLlfSUT2xCPsMrmOI7KEsfzmeqaXxnVC8add9657ZrSHqXjHcZR0iQtCEAAAqMhgGAcDcexp8II49gRkwEEak2gm2Dceuutw1FHHZW932tTTnsVNiXBKNGjFTpNuGyzzTbZiKjKsHbt2tZ5u96v6DIWykdCaebMmWHGjBmtPR5tlVCd0/Ydlv7ZZ5/dFsfS6RZKjEn8dstj9uzZoewCPrENJuAsH9lcVBateJonHJWGldlzN77+mvIfpUMwjpImaUEAAhAYDQEE42g4jj0VBOPYEZMBBGpNIE8wdpty2quwKQlGiScTKwpPOumkNvMPO+ywtusSZP06iUAtmpMnkHzeur7ttttmgk5iyV/rlafKIaHYKw+lqThz5sxpidOitGMbJPbK5qM8ZI8JYMtDafhydTtGMBo1QghAAALNJYBgrEndIhhrUlGYCYGKCJhgtCmnq1ev7ngvsR/TUhKMEnJetGixG+/iaakSdP04CSbd4/PQsd6PnDdvXub1zqRGNi2OxFZsV7c8JeJioajpnRK/lkf8Pqbykl2xoPP5xIJRYtnno5VkfR7xyrLKI15dVqOOZpNCK7NC2eyv2Yimt2mYY0YYh6HHvRCAAATGQwDBOB6uI08VwThypCQIgUYRkIjS9hdlp5z2KnwqglHiJRYsse1501L1nl4Zp2mosfCTcFuxYkVLcNu7nmIbCyhvW1F+NsXT4kp4SuRaXVn6CpVvLBwlGmVnnosFo4lF5bF06dKOPJRnLLBlV8zL22R2K9SiN/5ank3DnEMwDkOPeyE
AAQiMhwCCcTxcR54qgnHkSEkQAo0joI78qFwqgjHee3HBggW5RdQIoBc2etewjNM7i/4+jcB146hreYJLaeQ5CT2/HYiEnMR9rzzikUBxyHOxYJQdlkdefJ1T3hp19OU++eSTi6K3xWOV1EJMXIAABCDQWAL5v3CNLW59C4ZgrG/dYTkE6kggFcEYTxVduXJlLk6tmuoFkEbaikblfAJl0/f3SHDFo4BFgjEWpEWC16evY5XTl6dolDFPMN50001xch2f4z0suwlBb0e3eB2ZDHCCEcYBoHELBCAAgTETQDCOGfCokkcwjook6UAAAmUIpCAYNU3SixWJtCInEaeRNR9f7w12c3H6WjynrIsFqvLNc350Ue//dRtZjO+PF/NZvHhxHCXEglEjk2WdZyXbipyPh2AsosR5CEAAAs0lkP8L19zy1rZkCMbaVh2GQ6CWBFIQjJpW6sWKpoJ2c/G01Hgxl/heTcP06Zcd/VM6En7+Xh3HTovV+Diyrx8ne/z9EoexiwVjvIJsHN9/9mnn2W9xfbxJC0aNFNvWIIQvbJECBzg0sQ3suuuuwfbXtWcPYToEOn/h0rENSxwBBKODwSEEIDB2AlULRk0ntQVcTLBocZtuLm/UL2+fQUtD4sfSVtjvip/xtFRL18KFCxe2pa/P/bh4ewvtmxi7WDDmicr4Hvvsy67jIufjTVow+rw53qStPcEDHk1rA5tvvnm28FfRs4jz1REo/oWoziZyziGAYMyBwikIQGBsBKoWjBdccEFb57jM6JxG/TS10nei9A5hkYunsJZ559GnFQtOf03Hc+fObbNFn2VPWa8tMnxZ8sQaghHR4NsIx7SHurcBbRGFS48AgjG9Osm1CMGYi4WTEIDAmAhULRjjrS4OP/zwUkIrXl1U7xAWubhjVRSv6HwvwRhf14hpv97biGBEDPj2wDHtoYltAMFY9ItT7XkEY7X8S+eOYCyNiogQgMAICFQpGON3/9QpKiu08jpQRRvfx3H7xRYLwvj++HqcX7+fEYwIhH7bDPFpM3VrAwjG+Jckjc8IxjTqoacVCMaeiIgAAQiMkECVgjHee3HYDo+mdua5ON28ON3OxYIwjhtf17TaefPmDezzFv1p+pTUP/3TPw1nnnlm5v/5n/85yH/+858PF154IR4GtIGat4H4GazPCMb4lySNzwjGNOqhpxUIxp6IiAABCIyQQJWCMd4bUSt/9iO04u0olF6eG/Ydxvh9yTiPWDBqf0S9ZzmMj/NoumCUWPzRj36U+UcffTTIb9iwYSiGw/Dn3uHaL/zg59sAgjF+oqf7GcGYbt20WYZgbMPBBwhAYMwEqhKM2jvRdyJs70Lfyeh1HG96r/S052LsYkHXzyqpWn3V26nj2Enk+jj9rGAap1X0eRoE4x133BHk161bl3nVPw4CEKg/Af98tGNGGNOs185fuDTtnHqrEIxT3wQAAIGJEqhKMMZ7L0p0DeLixW+UbuzifRg1Fbasi1dxzROMl156aZtg1Cqp/TiJUr+iap6gRTD2Q5S4EIBASgRMJPoQwZhSDW20BcG4kUXSRwjGpKsH4yDQOAJVCEYJpHjvRY0WDuLiTe+VbrxtxuLFi9sEnaauxnHy8lYcrb7qOzl5gjEehSyaGpuXh85JwPrFfiRAY4dgjInwGQIQqAuB+BmqzwjGNGsPwZhmvXRYhWDsQMIJCEBgjASqEIzxqN1BBx00cAnzpqVquqt3mtoYv4dYZpQx3h/ROj0+bTuO36eMbbB4cShR6t/l1PuWa9eujaMFBGMHEk5AAAI1IWDPTh8iGNOsPARjmvXSYRWCsQMJJyAAgTESqEIwxqN2eauC9lNkCU7fEdHejrFTHj6ORvQkGvNGGnVOU1vjUVC7P05bn7XQjV1XWHYU85BDDmm7r2hq7qQFo4RrHpu8sg9y7oYbbmgrtxa94R3GQUhyDwTSJ+CfjXaMYEyz3hCMadZLh1UIxg4knIAABMZIYNKCMW/vxbw
RtX6KHItBdUg0TdQ7jTJquwvrrCiUIJw5c2bQSKK9QygBN2PGjJZYlHCKRyd9uv5Yq7z69GfPnh3y3kfUPbJvzpw5rXx0n97HLFroZRKCMRbest+4FJXDl7+fYwRjP7SIC4F6E/DPRTtGMKZZpwjGNOulwyoEYwcSTkAAAmMkMGnBGE/zlIgb1klwWifEQgmd2EmMxaJO8f37g35UUWJx2bJlwa+y2m36bJ4olfiUMNRopokvE6Vmq0Ll1e09zkkIxjzhbWyU/ygdgnGUNEkLAmkT8M86O0YwpllnCMY066XDKgRjBxJOQAACfRK44oorwpo1a0rdNWnB6N/XU8dh0aJFpezsFSl+h1DTXvOcRN3SpUtDPJpmnRgLld6KFSuyJLxg1HE3168oVX4aWbS8itKehGDME7zGA8FYVDOchwAEehGw54gPEYy9qFVzHcFYDfe+c0Uw9o2MGyAAgYjAAQccELbYYovwP//n/wyXX355eOqpp6IYGz9OUjBqGqbe0fNeImUUTu8Q+nR13O0dPOUrkSbB6u/TZ533dvUjGFUW3VtGlGqqq0b1tEF9L6cpod7OfqaI+vt03M3J9ttvvz1o9Vl/Xz/5dUvfrjHCaCQIIdB8Al4o2jGCMc16RzCmWS8dViEYO5BwAgIQ6JPA/vvvn03RtOmE22+/fTjhhBPCbbfd1pHSJAWjMpcg8b7DoCFO+HR1XMbF9+Tdp+mi1snpZ49FpdVNlEoo5uVXZLe3tShO3nl/X9n8BrknL++icwjGIjKch0DzCNjz04cIxjTrGcGYZr10WIVg7EDCCQhAoE8CJhj9j7PEo96ne81rXhPOPffc8JOf/CRLddKCsc+iVB5do5Se48KFC/u2KRZfZUVb3xnV6AYEY40qC1MhMCQB/wy1YwTjkFDHdDuCcUxgR50sgnHUREkPAtNHIE8w2o+0QolHm7KqKZF33nlnWLVqVbj++uszP3/+/DaRpOdSnZ2mwtqCMwrL7pGoMiuuZ6dVXnHDE0AwDs+QFCBQFwL+GWrHCMY0aw/BmGa9dFiVmmBUJ1I/7HgY0Abq0wb23HPPNpFjP9BxaFNWtbXEUUcdlb23JtHYRMFoZVVYdo9EjS76RXq6rZDa8TDnRFcCCMaueLgIgUYRiH979BnBmGYVIxjTrJcOq1ISjFppUaMQvqPF8abw2BQGqX8P8n6ce52zMu20007hbW97W5vgrPsIox60WonUM9h777079mr0D2Qt8qJ9CP09WlgHNxoCCMbRcCQVCNSBgH+O2jGCMc2aQzCmWS8dVqUkGP/qr/6qrbNkX3LCTeCyCQya+j140YteFPbYY4+2Nt4EwSixF9eZ3unUnoh+uqr2iZRQ1DUfX/s34kZHAME4OpakBIHUCfhnqR0jGNOsNQRjmvXSYVVKgtHbog7VhRdeiIcBbaAGbWC33XZrEzv2A90t3HXXXcMxxxwT/v3f/z188pOfbLu/CYJRD1u9r+lXPBUPG1n1YcxJ20uwUE3Hz9VQJxCMQ+HjZgjUikD8TNVnBGOaVYhgTLNeOqzyIs2+YFrAogrnbbnnnnvalsLPW/WPc+3bBcADHlW1gV6L3tizZbvttstE4pVXXhn0Hb/uuusaueiNPT9VHw899FDQaGEsHI2JD4899thsT0LdhxstAQTjaHmSGgRSJuCfq3aMYEyzxhCMadZLh1VepNmXKgXBqMVvcBCAQD0IdBOMm222WXjLW94SvvzlL4cnn3wy8/p+N3mV1LjWJAC1B6I2qNeoo9+gXhvWL126NLuOUIzJje4zgnF0LEkJAqkTsP6sDxGMadYagjHNeumwCsHYgYQTEIBAnwTyBOOrXvWq7F29Bx54oDVbQMlO+z6MEoWx7xM30QcggGAcABq3QKCmBLxQtGMEY5qViWBMs146rEIwdiDhBAQg0CcBE4yacnr88ceH73znO4UjZtMuGPtES/QREUAwjggkyUCgBgRMJPoQwZhmxSEY06yXDqsQjB1
IOAEBCPRJ4LTTTsumWmrKaa9plQjGPuESfSQEEIwjwUgiEKgFAS8U7RjBmGbVIRjTrJcOqxCMHUg4AQEI9EnApliWuQ3BWIYScUZNAME4aqKkB4F0CZhI9CGCMc36QjCmWS8dViEYO5BwAgIQGCMBCcZ777038+rEy8+fP7+R22qMESNJ90kgFoxnnXVWuPPOOzO/bt26II+DAASaQcALRTtGMKZZtwjGNOulwyoEYwcSTkAAAmMk8NRTTyEYx8iXpPMJIBjzuXAWAk0kYCLRhwjGNGsawZhmvXRYhWDsQMIJCEBgjASKBOOWW27ZGmV85zvfOUYLSHoaCSAYp7HWKfO0EvBC0Y4RjGm2BgRjmvXSYRWCsQMJJyAAgTESkGBcvXp15m+88cYgf/7554ff/M3fbAnGfffdd4wWkPQ0EvjXf/3XVvtSB/Lcc88Nd911V+Yfe+yxII+DAASaQcBEog8RjGnWLYIxzXrpsArB2IGEExCAwBgJFAnG2bNntzr0O+644xgtIOlpJPDRj3601b7UibzooosQjNPYECjzVBDwQtGOEYxpVj2CMc166bAKwdiBhBMQgMAYCUgwrlmzJvM33XRTkP/0pz8dNKpoP+wKH3/88TFaQdLTRuDYY49ta19f//rXw4oVKzKvtkZ7m7YWQXmbTMD/ltgxgjHNGkcwplkvHVYhGDuQcAICEBgjgSLBqPcW7Ydd4eWXXz5GK0h62gj81m/9Vqt9zZo1KyAYp60FUN5pIuB/S+wYwZhmC0AwplkvHVYhGDuQcAICEBgjgWeeeSY8+OCDmb/11luD/Oc+97mgbQ7sh13he9/73jFaQdLTRGDZsmVtbevII48M1157bVi1alXmn3zyySCPgwAEmkHA/5bYMYIxzbpFMKZZLx1WIRg7kHACAhAYI4EiwSjRuP3227c69ttss0145JFHxmgJSU8Lgblz57balTqP2vcTwTgttU85p5GAiUQfIhjTbAkIxjTrpcMqBGMHEk5AAAJjJPDss8+GtWvXZv72228P8l/60pcy/4Y3vKGtY/+BD3xgjJaQ9DQQuOOOO8Lmm2/ealcvfelLw9KlS8M3v/nN1ru0miYtj4MABJpBwAtFO0Ywplm3CMY06/BUpZ0AACAASURBVKXDqkMOOaT1Q2pfqiOOOKIj3iROePGqqUI4CECgeQQ2bNgQHn300cyrMy9/9dVXZ/7DH/5weMlLXtJ6Jr34xS/OFiVpHgVKNCkCBx54YKs96TfumGOOyf6k0B8VDz30UOb1J4Y8DgIQaAYB68/6EMGYZt0iGNOslw6r9t9//7YfU325JNyqcAjGKqiTJwQmS6CbYDznnHPCfvvt1/ZM+r3f+z1WsJxsFTUmt3/8x38Mm266aas9aeEb7cdoI9sIxsZUNQWBQBsBLxTtGMHYhiiZDwjGZKqiuyEIxu58uAoBCIyWwPPPP58tMKJFRu67777M2/Yan/3sZ8Pf//3fh9/4jd9odfL1Y3/ooYeG9evXj9YQUms0gYsvvrhNLKodnXnmmdnqqD/+8Y+D/Lp16zKvPzHkcRCAQDMImEj0IYIxzbpFMKZZLx1WIRg7kHACAhAYI4FegvHEE0/Mpg1uueWWLdGoUaI//uM/Dj/5yU/GaBlJN4XA6aef3iEW9aeDttKQRzA2paYpBwTyCXihaMcIxnxWVZ9FMFZdAyXzRzCWBEU0CEBgZATsnbGHH344yP/gBz/I/JVXXhk+/vGPZ/7tb397mDFjRks06kf/la98ZVi0aNHI7CChZhHQu+9qN34aqtrNHnvsES655JJw2223Zd62dbHFbvQnhjwOAhBoBgETiT5EMKZZtwjGNOulwyoEYwcSTkAAAmMmUEYwnnDCCUGLcr3oRS9qE40SAwcccEC2LcKYzST5mhBYs2ZN0Iq6WjApFou/+7u/G/7v//2/4dJLL0Uw1qQ+MRMCwxLwQtGOEYzDUh3P/QjG8XAdeaoIxpEjJUEIQKAHAXtnzDZ
Mv//++4P8t7/97axjr879Rz/60cy/4x3vaFs51X78JQw04iihoPfVbrjhhmyU0jZjJ3xhU/omcrj55pvDFVdcEc4999zw+te/PhuJjoWi2ommMeu92AsvvDBrHytXrgzy9u7ic889F+RxEIBAswjY74QPEYxp1jGCMc166bAKwdiBhBMQgMCYCfQjGP/mb/4mHHXUUWHHHXdsG2m0joCEgnlNYcVPBwOrc2sHPvxv/+2/hXe9611Bq6RKLCIYx/yFJnkIJEbAPw/sGMGYWCX9yhwEY5r10mEVgrEDCScgAIEJEdDKp/I24nPPPfdkG6prU3W9cyavrTbkNZL4zne+M/z3//7fc4WjdQoIN5laPloo6aCDDsqE4uc///lMKF577bXZ9GXt92nvzD7zzDNBnncXJ/RFJxsITJhA3u8AgnHClVAyOwRjSVBVR0MwVl0D5A+B6SVgHXbrwP/iF78Id999d+a/9a1vBfmvfOUrmf/0pz8d/vmf/zl86EMfCkceeWTYd999w7bbbju14iivQzSN5zbffPPw6le/Ovzv//2/w7x588KXvvSl7I8GTVGWl1CU156LtsiNjXBP7zePkkOg2QTynoUIxjTrHMGYZr10WIVg7EDCCQhAYEIE+hWMEo1nnHFG+D//5/9kfu7cudl01be85S3hDW94QyYif//3fz/sueee+AYxUJ3K/8Ef/EHm9bv11re+NROJmrKsqafz58/PvI1MIxgn9CUmGwgkSADBmGClFJiEYCwAk9ppBGNqNYI9EJg+Al44Pvroo0H+vvvuy/zy5cuDvE0t/PKXv5wtZKLFTEwknH322dno0kc+8pEgL1GpkUh8/RmoLj/84Q9n/qyzzgryWuxGfsGCBZnXqKLtsfjd7343yNsCNzYNVaOLjCxO37OFEk8nAQRjfeodwViTukIw1qSiMBMCDSYwCsEo0XjmmWdmXgJDQgNffwaqS/sjwFbOtb06EYwNfihQNAgMQQDBOAS8Cd+KYJww8EGzQzAOSo77IACBUROQcLQ9Gh9//PEg/7Of/Szz9m6jRhtvueWWzNuo45IlS4L81VdfnXltuWD+a1/7WsDXj4HVn8Krrroq84sXLw7y3/jGNzJ/0003Bfnvfe974a677sr8Aw88EORtIaWnn346yGt0EQcBCEwHAQRjfeoZwViTukIw1qSiMBMCU0AAwVg/YTcuMY5gnIIvPEWEwJgIIBjHBHYMySIYxwB1HEkiGMdBlTQhAIFhCdj7ZjZC9NhjjwX5tWvXhvvvvz/ztin9j3/84yBvK2L+6Ec/CuZ/+MMfBnx9GagerV5tFNFGm+09V41C24iirYRqW7bYdOdh2yP3QwAC9SGAYKxRXdXH1Om2FME43fVP6SGQKgEEY31F3igFOoIx1W8odkEgXQIIxnTrJraMEcaYSKKfEYyJVgxmQQACbQRMQOodRxtFeuKJJ4K8jT7aKJOttKrwkUcewdeYgerQ6tXq2erd2oHahLUPRhTbvjZ8gMBUEkAw1qfaEYw1qSsEY00qCjMhAIGeBEwsKDQBQbih1ix8nfZsAESAAAQgEEJAMNanGSAYa1JXCMaaVBRmQgACPQl4cYFQrLdQtPrzddqzARABAhCAAIKxVm0AwViT6kIw1qSiMBMCEIAABCAAAQhAoCcBRhh7IkomAoIxmarobgiCsTsfrkIAAhCAAAQgAAEI1IcAgrFGdVUfU6fbUgTjdNc/pYcABCAAAQhAAAJNIoBgrE9tMsJYk7pCMNakojATAhCAAAQgAAEIQKAnAQRjT0TJREAwJlMV3Q1BMHbnw1UIQAACEIAABCAAgfoQQDDWqK7qY+p0W4pgnO76p/QQgAAEIAABCECgSQQQjPWpzUaMMJ5yyinhkEMOyfy40C9fvryVx2WXXTaubArTRTAWouECBCAAAQhAAAIQgEDNCCAY61NhjRCMBx98cNh0000zPy70N998cyuPs88+e1zZFKaLYCxEw4VfEbA/TfQ
HCg4CEIAABCAAAQikTADBmHLttNvWGMFoja69eKP7JMFoeSAYj26xWLVq1eggk9JQBOxPE/2BgoMABCAAAQhAAAIpE7B+tQ+/8IUvpGzy1NqGYCxZ9QjGjaCOPhrBuJFGOkf2wEUwplMnWAIBCEAAAhCAQD4B67f4EMGYz6rqswjGkjWAYNwICsG4kUVKR/bARTCmVCvYAgEIQAACEIBAHgHrt/gQwZhHqvpzSQnGhx9+OGhBmeOOO661wMwRRxwRLrjggnD33Xd30LJ3trbZZpvWFEk7V7QwjYTfOeec00pf8ZVfUR5KR3H23nvvVh677LJL6/4Oo351YsmSJcEvxqM8imwqSsOfnzVrVit/+2KddtppPsrEjhGME0PdV0bWLhCMfWEjMgQgAAEIQAACFRCwfosPEYwVVESJLJMRjBJYM2fObC0sY+9jWThjxoxMgPky2TXf0Oxc/J6hrXKqdCxOHOqaxKR3SkfxfB46tnt9XB0rn9mzZ7euWzwLdU1l7dc1RTB6EW0MJKT1x4CEeZHrR4Drjwf74yCuT0tff0BYnG75WpyihWSUjvIw+xVfxzqntlDk7I8Ixbd4+jPD/iyxc/5+y8tssj9TVF45a6MIRk+NYwhAAAIQgAAEUiRg/RYfIhhTrKkQkhCM6iibKNtrr73CJZdcEpYuXZr5BQsWhJ133jnrDCuO77hbHN1jjc3OrVy5skVcHWqNCloehx12WFi0aFErj5NOOinYKKXi+JFApaM0ZYflceyxx7bubWXyK7G47bbbZvGUntI1e7rl4dMoOm6KYJSYEWN5OQkk+2znPINBBbjajNJTvee5Sy+9tC3fPIGmc2bbySef3JaM2pRs7/UHhESdCTqfgP0RofTV/tXmfFo651183exSqD9aFN/aJ4LRk+MYAhCAAAQgAIEUCVi/xYcIxhRrKhHBaNM91clfu3ZteP7551u0dLxhw4Zw0EEHZR1idZDj6anqIFtja93oDjTd1K5LuCnNOI9ly5a14mgEJ3a+Qx6PXiquRIEXi0ovLw8vTONyxHn6z00SjFYXGoVTfdpnhd5JsHmm/QhwxbV08zjPnTu3dV3xJCBjt3DhwlacxYsXt12WEDTbe/3JIWEZO7Uhs8/Eo31W6AWjxKLlFf8RoT8vFF9i0+5HMMa0+QwBCEAAAhCAQGoErN/iQwRjarX0gj3tPfSKbLSG0q2jqxFBixd37nsJRi8O/MhjXFwbqcyzo5dglPgx+zRCWuTmzZvXipcnJIrua6JglAiKBZCVf1gBLoFn9RGLPeVhdW1x1EZi59uNHyW0kUfdq3T0h0b858BDDz3UNjIep+0Fo4lBiT8b+bb8ynBQe7NyKMxrv3H+fIYABCAAAQhAAAJVEvB9FztGMFZZI8V5JyUYNcJoHeXYZJ236Z3xiFEvwajRPrs3Ttd/tqmveR3uXoLRRsKUhhcPPn0dS7Dal6JoumR8jz43UTBKLMYjsVb2YQW42otxzptOatdsxDevLnRO8TS67Z3+sLD7u/054Ec5/f069oJRaSkdtZu47XgO+rMhz+keG4FXWnntN+8+zkEAAhCAAAQgAIGqCFhfyocIxqpqo3u+SQhG67SrwWh6ar+LwvQSjN0RhGz6n++Y53W4uwlGjThZY8+7N87flze+VvS5iYKxSACJwSgEuIkotSnvbPRR9eBFnf+zQn9KWJ3Gduqa/QGhKdRFzo9QxnG8YJSdsVC0+CZaZUu3vPwIfJk2aOkTQgACEIAABCAAgSoIWD/LhwjGKmqid55JCMZ4Sp2m6OmdLL1LKCHn3+fKK1JZwaiOvt5n1FRQpa3FQpSPLTZiDTavw91NMJoA0f0SIbaKZVHoBaPEZhnXRMFYND14VALcL1TkxaBGHFVXWvzIjxb6qav+vEZB+3GyX+8dmuhVXrHzgrFolNKLVk197eb8iGpe++12L9cgAAEIQAACEIDApAlYv9uHCMZJ10K5/Dp7suXuG2ksja5oxMZGhKz
hSDiaeJS4k9DzHX8zooxg1OqqJgwtXeWjKaTK14805XW4uwlG3/lXmpZ+UWjlU9hLDFsZmygYrWxxOCoB7oWnF4PWXiQovShTPZqz0UGJ+yKntihhaFuFaMuUvD8gVM+x822mqA34NpfXJuM0rV2ViRvfy2cIQAACEIAABCAwSQLWb/FhnQSjBoas35fHrdf1vHtSPdfZk63QUgnHFStWZFtYaPTHj8SpMUmAaXphLBpNAChOnpPQ1L26rjQlFG6//fZssRJbsER5W4PN63D7zrsXFsrPd/79lhs2bbFbGJclz36dmybB6HlavReJb6tXq7tYfNl7qf49RrvHRnctjr7Y5mwqqNphnpNQjPcNtfalPyDUDixdnY+dL2Nss8X1bS6vTVo8C41Bmbh2DyEEIAABCEAAAhCogoD1W3xYJ8HYS3/0ul4F80Hz7OzJDprSCO+TeJOXmJOw0ztkXjxqmqp33SpEosAEgtLQ6pVKO89Zg83rcPvOeywY/YjY4Ycfnpf00OemVTAOK8B1v+rV3mP09WiVYnHUTuT8qKP+XIid3+ZCaWt0Wn8KqL36PyC6tUsEY0yVzxCAAAQgAAEITBMB63f7EMGYZguoXDCq823v+tmIT4xKAk/vkVmD8iNBitutY+730lPHvpuz9PsVjH7qY693zZS/TWHUXn5l3TQJxlEKcL8YjFibUNMooDnfRiQo/fuLee9Z+ncTJRSL/oDo1i7NDrU55ZnnvLjNa5PxPd3abxyXzxCAAAQgAAEIQKBKAtZv8WGTBKMNfmkwoe6ucsGoTrFGduT9e2Z5YK1BxZ3nsh3zeGTQ5+FHleL0Fc933vPS8dMPlVaR0zUrb97ef0X3TZNgHKUA15RfazeqQ/3ZoM9+5VOfn8SjLYqjOo2dj9vrz4Gy7VJ25TnfJjU63s359pnXfrvdyzUIQAACEIAABCAwaQLWP/NhkwTjpHmOM7/KBaM69DbdtNuIm+88+/fRBKdbx9yPFnWbLqq8rcHmdbh9hzxPMEqA2P3dyuHzyRu9KqrsaRKMYjBKAS5hp7pRvVlbi0WanVf9aPqq4muqaux8O8hrJz6+L4M/r2PZYu0ltsXH9Wl0i6dRa0uvl10+fY4hAAEIQAACEIBAFQSs3+LDQQSjzVb0ux/oWIME2h2hzHoh6mOpD+jTmDNnTrZbQ3y/LXSjASCz3c751+aUv50v4qu0dY/ysriyQbaoXN2cxbeZlyqD1m2JyyAGw7rKBaMKYCuUCnzeSqgCYJ14dexjoeUFY1yp2rvOxIDSj+ErbYH2lZ43cqR41ihiwaoyaNjZhInSUuV7W3TsF9/pNT02rthpE4yjFOCWlp9KGvPV4jaqXy/QNJ01dv6PC7WrIqcvp7UXhbErKxj9ljP2QIjTkk2+bAjGmBCfIQABCEAAAhBIjYDvJ9lxP4JRs760Qr363b4fb2nZecVR3CKn/rnEV1EaEmD+fks/L/SDSl6f5OUtbaG0i/LVefX9vJ7w6Vj+ysfvBmHnLVQ6KuMwrrMnO0xqA94rUefFlipNgORtqwIrdN6edX6/PcXXfV4Y+k63oFkcS1vnNJpki58orzgNFc0LT7PPFzkuhxqBxfMNUfkUvffm0/PH0yYYRynA/fuvqlv//qIx9m3I2lrRF1T3W5z4Cyjx5v8YsHhxWmUFY8xBafu0lJ/9mWJ5IRitVgkhAAEIQAACEEiVgPVbfNiPYPT9Hw3EaKcFW4BQi2baYIDSV388z/k+m+LbLgpaJFN9Q+v76495639ZHr4/aOd8/76bYJQAtT/7lYe0ivJUOiqHH0xTOS1vXwbjJh0jLy2lwQ6zRetseH3ltZFPp8xxEoJRhkpsaSTIKsYKbzCsEn1FWAG9UFN83esVvu4RQD96pDiKq3Rt4RIJC5+/T0N5xcJTacROlaRyWF6+HKo0pZFXhjid+PO0CUaV39erOA4jwK1eVed5I8R+BFlxVFdFLm4nZpdNJ5Ctut+LUH3Z9e+PubK
CUfFjDv4PFR3LXt82EYxGmRACEIAABCAAgVQJqP8S+7KC0ffb1JfP61vrnBdMMQcJKPXZZENeGrrf9+X8dFOl1U0Q9rpu27cp75tuuqnDfuVtM+QUJx6gUPqencqp/mLMwQ+aaJrroC4ZwagCqJCmrCXizOtcDCAusOLoXwG7J562amn7OHnpSt1bGnGe+uyvK16ey8vL/jWI08y7P+/cNApGcVAdjUKA+9HjosWV/Bev15RhfQH9P0v2wNEXVuJNdnuhp7S9kOtHMIqD0vJlUH7y+mNCf4aoXZn9Pp+8tsQ5CEAAAhCAAAQgUDUB67f4sKxg9Cvc571CZGXzosvOWehFW6wbLI76VzbooD//vRtUMEqoWpnVtyty6vtZ3urzxaOMloZCic4iZ6JZI5qDuqQE46CFmIb7miIYJbRMkJett1EIcD0ILN/4C2d2+D8T9CXt5WSXpg1Yunl/Ckg42nWfZhl74vyVn//DwqZe6Lyc5SPGOAhAAAIQgAAEIJAyAS947LisYLS+Yd7gjy+zFry0tP15TQm18xJU3Zz/w97HG1QwaqTP8u4mdpWXzzueUmppaPCgm+tlZ7d77RqC0UgkHjZFMCaOGfMgAAEIQAACEIAABCZAwASPD8sKxiLztLaDpqtq8UH/fqLy8M6PUPaaVebFqU+jlxArum7vLsqmokEMy8fPSItflTNuyqebK7Kj2z3xtXZ68VU+J0MAwZhMVWAIBCAAAQhAAAIQgMCQBEzw+LBfwShxKGFo60hobQdbaNJeF7L0vbndhJiP1+24lxArum72KOzlutlp6SAYe1GcousIximqbIoKAQhAAAIQgAAEGk7ABI8P+xGMEoomDi0NrS+hBWz07qLWlLD393Tdu25CzMfrdlwkCO2eoutma2yT3efDbnZaOghGT2zKjxGMU94AKD4EIAABCEAAAhBoEAETPD4sKxi18ryNINoCgPY+o6aQmi8Sbd2EWFnERWnb/UXXfXktblHYzU5LB8FYRG8KzyMYp7DSKTIEIAABCEAAAhBoKAETPD4sKxhNLOpeLSQogZjnikRbNyGWl07euaK0LW7RdV9ei1sUais4i6/3Lr2z8whGT2XKjxGMU94AKD4EIAABCEAAAhBoEAETPD4sIxj9HoyagtrNFYk2bbFm+Wol1W5Oq5rau5E+XlHaFqfoup8mq9Vauzlt5WF2xnHtPIKxG8Epu4ZgnLIKp7gQgAAEIAABCECgwQRM8PiwX8HYTSxpBVLbx1B5eOevadXSotVKdd5WNdW7kd4VCUKLU3Rdq7JamTW1tshpxVeLl7d1hl3rxkBpF9lRlG/e+XZ6eTE4lwQBBGMS1YAREIAABCAAAQhAAAIjIGCCx4dlBKNEnN2Tt6G9maZFcSyewth54XbOOefEl7PPEnSWhhbR8a6XECu6rim0JmQlRuORQ8vD79cY5604ZheC0YgRBgQjjQACEIAABCAAAQhAoCkETPD4sIxgVPn9hvaatmmiS2JSG9zPnj07WxTHhJny0Iidd2vXrm2toirhKdFocRTqs70rqWmk8XuSmspqtlv+Pv0iwag4EoB2r0SjbLZRTuUtsWh5a9ptnLfSsPsRjJ76lB8jGKe8AVB8CEAAAhCAAAQg0CACJnh8WFYwerGn++0dQ7/NhkYQFy1a1BJWunbIIYe0EVy2bFmbaMxLR2JR+cXOiz67z49UdhOMEoALFixo2SZxaGko9GIxL2/ZYtwQjHHNTPFnBOMUVz5FhwAEIAABCEAAAg0jYILHh2UFo1BISGm/Rb3fZ2lI3EkorlixorW1hp96Onfu3A6KSkfizS9Go5FJvbMoUajtOvKcRJ/S9qOYWn3VXDfBqDi6X3ZqtNSXweedN7Jo6VuZEYxGhDDssMMOrS+DNZCPfvSjlZA5+uijW7asWrWqEhvIFAIQgAAEIAABCECgvgSsP+vDfgSjSi5BJUHnfSyyfJz4mtHzcSwtnSuKX3Sfj+/TtPh5oY/XT94
+bl66ds6nb+f6DTvfAO03BeJPhID/Mtlxv1+qURmKYBwVSdKBAAQgAAEIQAAC00nA+rM+rKpvO501UL7UCMbyrCqN6b9MdlzVlwrBWGlTIHMIQAACEIAABCBQewLWn/VhVX3b2sMccwEQjGMGPKrk/ZfJjqv6UiEYR1WrpAMBCEAAAhCAAASmk4D1Z31YVd92OmugfKmnTjBqydolS5ZkS+VqJSMtY2tL6Bo2LY17880328ckQv9lsuOqvlQIxiSaBEZAAAIQgAAEIACB2hKw/qwPq+rb1hbihAyfKsEogThz5syg/U60SpJWMpLX/i3a3FMiUYJSnxcvXjyhKiiXjf8y2XFVX6pRCUaJddVJGW91U44WsSAAAQhAAAIQgAAEUiZg/VkfVtW3TZlTCrZNjWA85ZRTsj1NtGTuQw891Fpq11YOuvTSS1v7n6jhxqOOVVeW/zLZcVVfqlEJRjGXcJeA134z3pugP/nkk8Muu+zSqhvtn5O3OWrV9UP+EIAABCAAAQhAAALlCVh/1odV9W3LWz2dMadCMGp0yjbALNr8UsJR+6BYo02tOZhdPqzqSzUqwSjm8tqDxpdLot6uKdSywba5qepRAjO1KcOptRfsgQAEIAABCEAAAikT8H0/O66qb5sypxRsmwrBqFEpa4jdoC9btiyLd9BBB3WLVsk1s9+HVX2pRiUYDaRGc325JA5jJ+HoN1TVqCMOAhCAAAQgAAEIQKCeBHzfz46r6tvWk+DkrJ4KwWiji2qMvZziaBpkas6+SD6s6ks1asGoqam+XEVTTufNm9cWj1HG1Fop9kAAAhCAAAQgAIFyBHzfz46r6tuWs3h6Y/VWUA1gY41QoRZa6eY0urhw4cJuUSq55stgx1V9qUYtGPW+opVp5513LuSrBYosnkIEYyEqLkAAAhCAAAQgAIGkCfg+nR1X1bdNGlQCxk2dYNT7b90WtDn44IOTFCL2RfJhVV+qUQtG1YmVS++RFjkEYxEZzkMAAhCAAAQgAIF6EbC+nw+r6tvWi9zkrZ0KwahRQ98YtW2Gts/Ic5dccknhtbz4kzrn7bfjqr5UoxSMmn5q5VG4aNGiQqR+JFJxuwn/wkS4AAEIQAACEIAABCBQOQHf/7Pjqvq2lcNI3ICpEIw33XRTmyhRoywSjVpcJUVnXyQfVvWlGqVg1HYnvkxF/CXw/UhkigsTpdhusAkCEIAABCAAAQikSMD3/+y4qr5tinxSsmkqBKOAxwumqGFq9dS6OPsi+bCqL9UoBaOEu5XpsMMOK6yO4447rhVvm222CStXriyMywUIQAACEIAABCAAgbQJWP/Ph1X1bdMmVb11UyMYNXLl91lU49TqqRIidXD+y2THVX2pRiUYy2ynoThHHHFEax9Nba2h7U9wEIAABCAAAQhAAAL1JWD9WR9W1betL8XJWD41glE4165d27aXnxpoXUSj/zLZcVVfqlEJxng7Db2jeM4552ReU1XnzJkTZsyYkdWRRh/1fumGDRsm880gFwhAAAIQgAAEIACBsRGw/qwPq+rbjq2QDUl4qgSj6qxINF5wwQVJV6n/MtlxVV+qUQlGjRxaWbSdhlZBlZeIt70zTzrppPDQQw8FjRAXvd+YdMVhHAQgAAEIQAACEIBABwHrA/pwXH1b9fN7ba3XYSAnWgSmTjCq5JrSqPfgfAPVgipFK6e2aFV44G2143F9qXoVc1SC0S9iI2FootCvanv44Yf3MofrEIAABCAAAQhAAAI1I2D9WR+Oo2+r15vU59RABW4wAlMpGIUqTzSm/M+D/zLZ8Ti+VGWa0SgE45IlS9oEu1ayNaepp1ZGjTSyfYaRIYQABCAAAQhAAALNIGB9PR+Oo29rM9q0DgZuMAKNFYxlxJ/2/PONVO/Qpeq8nXY8ji9VmfKPQjCefPLJLfYa7fVO04atjApTny7sbecYAhCAAAQgAAEIQKA3Ad/Xs+NR921vvvnm1mtOygM3GIHGkttll116EtE
USD/98eCDDy68p+rpqvZF8uGov1SFhY8ujEIwqn6sLHnbafgVbcvUZWQiHyEAAQhAAAIQEwsgGAAAIABJREFUgAAEEiZg/UAfjrpv6/ubymf58uUJE0nXtMYKRjWKMm7BggUt4dJthFH7BVbp/JfJjkf9pSpbvmEFY7ydhqagxi4e/e3nC161uI/LwmcIQAACEIAABCAAgXYC1p/14Sj7tlp5P16zZPHixe1G8KkUgXKqqlRS6UTS8LMan96T6+UsruJrm4c8p70aq5737L9MdjzKL1VeuYvODSsYNcXUyqAw7x1Fjf76L7m22SjjFE91ioMABCAAAQhAAAIQSJeA7wva8aj6trbQzbx589q21NNq/Lj+CTRSMC5cuDATJBJ6vZzFlTjRu3PeSXhoL0AtvCLBqM9VjV7ZF8mHo/pS+TKXOR5WMNrLxypLNyGulVOtvFrdqpfTe6uqq6rqqJd9XIcABCAAAQhAAAIQeIGA9fF8OKq+rfqa2rJNfXutuG95HHLIIeAfgEAjBaMtqCLx0G20ScLC5jZramrs1KgkaNTI1Oj0uZ+pkXF6w3y2hu7DUX2p+rVrGMEo5qoXK4f++SlyWsnW4iksGjHWv0h6MFi6qvO8UcuifDgPAQhAAAIQgAAEIDBZAr6PZ8ej6NuqH6g+ob3ypFFFS1/9eVz/BBopGLV4jTUMjUzlCQ0JF72XqHhaYCVvU3idk6BRHG37kBenf+SD3WHl8eEovlSDWDOMYNQooC9DN0Ev20yw656if4U0DdXqXCPFilc0vXiQ8nIPBCAAAQhAAAIQgMBoCfj+oB2Pom+r/r36j9Zv13uLlr5CXP8EGklNouH2228PK1asCBo51Ciippbq5Vd5TVWdOXNm9o6crluDysOnRqf0qna+odvxKL5Ug5RrUMHoRbqVodfLx35RIv1blLddiupPgl5pSuDrc7c6HaTM3AMBCEAAAhCAAAQgMDoC1hf04bB9W1snw+/vrdmBPo9egxWjK2FzUmqkYLQhaFWThMOGDRuCpjfqfUUNS8urIel8N2EhgaMGlrftw6SbgG/odjzsl2rQMvQrGDU9VKOAs2fPbvvCqhwaAZaAzxOCsm/lypVt90g0avpp/GVX+kpP9YyDAAQgAAEIQAACEEibgPVnfThM31b9dvUr8/rtPg9mofXfLhopGItEoM6bL4PKhrDz3m8sc/8o4/iGbsfDfKmGsW0QwSiRrtG/PK9r3b68+gPA36f4sWBMZSR4GK7cCwEIQAACEIAABKaFgPVnfThM31YDEEpLgw2x8/uuqx+J649AIwVjfwiKY9viOVUtdOMt818mOx7mS+XT7ve4X8Go9E2odwuL7Mi7x8dNaSTY28UxBCAAAQhAAAIQgEA+AevP+nDQvq0GEjQLTSvs5znttW75FK2JkXcf514ggGDs0hL07mMKqyk98sgjrUZujV3h1772tS7Wj+/SIIJxfNaEkNJI8DjLSdoQgAAEIAABCECgKQR8n9aOBxWMtpClQgnC2NuuCMonhbVJ6laHCMaCGtN7d2pUWkFVrugdu4LbR3p61apVuYLxhhtuGGk+ZRNLTTB2m4JQtkzEgwAEIAABCEAAAhCYHAETiT4cRDDaQjeWjkYaY2/XLNTsNFx5AgjGAlZ6p06NSqGmpFY53xnBWFBJvzrtR4I1JYE9GLvz4ioEIAABCEAAAhComoCJNx/2KxhtoRvNCFy6dGmhX7RoUdvgS7wWRtUsUs8fwVhQQzbXWSuryus9uqocgrGYvB8J1kNDo404CEAAAhCAAAQgAIG0CXihaMf9CkabZSZB2M2pH295KFTfvptTn5IBiI2EEIwbWbQdaWRRKyppZLFKsSijEIxtVdP2wUaCJfA1JWHt2rVt1/kAAQhAAAIQgAAEIJAeAS/g7LgfwWgL3ai/Xqav7ldK1cKW3ZzegWQUciMhBONGFm1Hanjm2y5U8AHBWAxd04X1AJBgRCwWc+IKBCAAAQhAAAIQSIm
AiUQf9iMYbaEb7a1extnsQeV38MEH595is9X0DiRuIwEE40YWyR4hGLtXTSrCvruVXIUABCAAAQhAAAIQMAJeKNpxWcFoC93Y4pSWZrdQswYtn7yVUpcsWRJmz56dLZijeOeccw6jjL8CimDs1rISuYZgTKQiMAMCEIAABCAAAQhAYCQETLz5sIxg1OyybbfdNhN/ZUcXZbCmmPq84n3W9ZqT9nFUnL322it7LY1pqS9UNYJxJE1+vIkgGMfLl9QhAAEIQAACEIAABCZLwIs3O+4lGLUQjU1F1T3ai7usO+KII9oEY7xQomas+b299Rn3AgEEYw1aAoKxBpWEiRCAAAQgAAEIQAACpQmYSPRhN8GoPdFnzJjRJvr0rqEWqIlHC80Indd13ae4Pi99njlzZjjllFMserbavuKsXLmydY6DEBCMNWgFCMYaVBImQgACEIAABCAAAQiUJuDFmx13E4waXczba1HTUrVYTZ7TeV3Pu0/ndG3ZsmWtW/3e3q2THCAY69AGEIx1qCVshAAEIAABCEAAAhAoS8BEog+7Ccay6Q4az+/tPWgaTb2PEcYa1CyCsQaVhIkQgAAEIAABCEAAAqUJeKFox1UKRtvbWyGunQCCsZ1Hkp8QjElWC0ZBAAIQgAAEIAABCAxIwESiD6sUjLYoDvt6d1YogrGTSXJnEIzJVQkGQQACEIAABCAAAQgMQcALRTuuUjBqqw5tpyGnfR41RRX3AgEEYw1aAoKxBpWEiRCAAAQgAAEIQAACpQmYSPRhVYJRq6nKDu3DKKF49tlnly7HNEREMNaglhGMNagkTJwaAtqXSX7Dhg2ZX79+fXj22Wcz/8wzzwTvn3766YCHAW2ANuDbgD0j7Lmh8Lnnnsu8PVem5oFKQaeagBeKdlyVYFy4cGEmGA8++OBsdJE9GNubJoKxnUeSnxCMSVbLVBmlZalvvvnmnn4aoCAY6fz7zj/HtId+2wCCcRp+KShjGQImEn1YlWDUe4vz5s0Ll1xySfancBn7pykOgrEGtY1grEElNdxEv/GtNr81r/n+2hB37733bp2bM2dOWLJkSSOISBzaP/9PPfVUkH/88ccz/4tf/CLIP/TQQ+GnP/1p5r/3ve+Fa665JnzlK1/Bw4A2QBvo2ga++tWvhu9+97vhvvvuCz//+c8zv27duiD/y1/+MvM2CqmRRxwEmkbAC0U7rkowiq39Idw0zqMoD4JxFBTHnAaCccyAK0h+zZo1meioIOuBs9SDVOLIHuoKFy1a1JqeuWLFinDYYYdl1zfddNNwyimnDJxX1Td6kfjoo48G+QcffDDzerdB/sYbbwxnnHFG2HfffcMrXvGKlmBW2fEwoA3QBsq0Af/n26677hqOOeaYoCX99Rshb0LyySefDJr+Lk+ntupfCPIfFQHfn7DjKgXjqMrVxHQQjDWoVQRjDSqpTxO333778K53vavPu6qPrqmp9lBXqI6Ld5rSYdfVWarrCmPdBKOmq/zRH/1RSxRaeQk3adU9LGBBGxisDZjI1J9Qf/d3f5f9SSXRiGD0vzQcN4VA3nMCwZhm7SIY06yXNqsQjG04GvFBD8n999+/dmVZvHhxSxRoNDHPaUlq+xHQstR1cPaPvf2D/8QTTwR5TTW96667Mv+lL30p7LPPPplQtPIRDtYphhvcaAO924DEo/5c/Id/+Idw7733ZlNVNV3V3oFkgZw6/LpgYzcCec8BBGM3YtVdQzBWx750zgjG0qhqE1EPyToKxuOOO64lBjXSlue0wpj9CNRlWepegvFDH/pQ2GyzzVrlsvIp3GqrrcJb3vKW8L73vS+ceuqp4fzzzw8XXnghHga0AdpAzzagZ8bxxx8fjjzyyLDjjjvmPmMkHF//+tdn7zoiGPN+dThXVwL+t9SOEYxp1iaCMc16abMKwdiGoxEf9GCso2DUIjf2UC+abqrOjcXRyqp1cLawxGOPPRbk77///sxrEZs3v/nNuaOKb3zjG8MXv/j
F7F9/E5yEL2w5Agc40Ab6bwMaMfzOd74TTjzxxPCyl72s9Ry15+kuu+wS/vM//zNbbEsLbtlIo7Guw7MWGyHgCVjb9iGC0RNK5xjBmE5dFFqCYCxEU9sLs2bNCkcffXSt7LdNbfVg17TTPHfZZZe1OjlFcfLuq/pckWA86KCDOsTinnvuGa6//noWnqi60sgfAg0lIAH4wAMPhL/927/tmNmgKap6FiMYG1r5U1YsLxTtGMGYZiNAMKZZL21WIRjbcDTigxYwUKegTu6cc85piUHtVRQ7jSbaCOQ222wTli1bFkdJ7rMtbmPvLNrKhP/1X/8VNP3Wj5bqx0zTTvWvft3qLjnwGAQBCPQkoOeM/pzabrvtWs9ePYf22GOP8IMf/CA88sgjmbc/vHomSAQIJEbARKIPEYyJVdKvzEEwplkvbVYhGNtw8KEiAtpr0R7qCxcuDBKI8lrYRnsxanl4CcVjjz02236jIjP7yrZIMH7yk5/sEIuf+MQnEIp90SUyBCAwLAGJRi28tdtuu7Wev3oOa6o8gnFYutxfNQHrU/gQwVh1reTnj2DM55LUWQRjUtUxEmN23333cMIJJ4wkrUkkovcV7YEuUSiBqPdpJBI1CqeFbrQno/ZprMvom+x86qmnMq/VUOU1Kqp3iF7+8pe3yqtya0/JupRrEu2BPCAAgckS+NGPfhS23nrr1nNJz129Q7169epsT9/HH3886B1IHATqRMD6FT5EMKZZgwjGNOulzSoEYxuORnzQw7FOi95oI2l7oGsEUeJJ4srO6X3FugkqbaGh94Dk77zzzsxfc8012QiplUuhVkDVlC8cBCAAgSoJLFmypPXM1bNp9uzZQdPn7Q+vp59+ukrzyBsCfRPwv7V2jGDsG+NEbkAwTgTzcJkgGIfjl+LdejDWSTAeccQRrY6KRhLN7bzzzq3zRaumWtzUwjzBeNVVV7Xew1QdacuMe+65JzXTsQcCEJhCAvpT7vDDD289c/WMOuOMMxCMU9gWmlJkE4k+RDCmWbsIxjTrpc0qBGMbjkZ80MOxToLRL/7iRxIXLFjQ6rxo2madnKajaiVC+VtvvTXzGj31P1xapRAHAQhAIBUCP/7xj9tWTn31q18dVq5cmXltCaRpqUxNTaW2sKMXAf97a8cIxl7UqrmOYKyGe1+5qjNrXyQfapW0Kpy2gzA7JGZx/RPQqnfvfOc7+7+xgjv8NKjDDjuszQJ1VKwt6J3GSTm9O6n3KIdxeYLxda97Xas8KteDDz44TBbcCwEIQGDkBOJRxm9+85sIxpFTJsFJELD+gw8RjJMg338eCMb+mU38jhtuuKGtE2tfrKrEGoJx+Cbwwx/+sDYriWp7CWtzl1xySUfhJSLtulZNHafT1h4Si8pPC+0M4uwf+HXr1mWrD2oFQoniK664Imy55ZatsvzRH/3RIMlzDwQgAIGxEtBiN/bMVXjaaadlezOuXbs2e9+ad67Hip/ER0jAt2M7RjCOEPAIk0IwjhDmuJJCMI6LLOmWIaCRQ3uQ572nKBFp1yUux+k0HVaCT+9OjlowfuxjH2uVQ+XRNho4CEAAAqkR0EJdm222Wet5pYW5li9fHhCMqdUU9vQiYH0HHyIYe1Gr5jqCsRrufeWKYOwLVy0iv+td7wrnnntu8raqE2IPcq2Emuck4rTVhuJtu+224eGHH86Llp3rJigvu+yy0tNMhxGMWuxGXp0rlU9eeX/gAx9olVVl0fcOBwEIQCBFArNmzWo9r37/938/3HLLLdn72FopldVSU6wxbMojYP0LHyIY80hVfw7BWH0d9LQAwdgTUe0iaBGZAw44IHm7/XTUk046qdBev1iMpo3mOU0lPfvss/MuZee0dUfZUcNxCMb3vve9rQ6Yfryqeke4EBAXIAABCPyKgBZNs072TjvthGCkZdSSgLVhHyIY06xKBGOa9dJmFYKxDUcjPujhmPIqqRp106qnfnVUjR5qJE7XYuf3ZNQ9Eo0Wz79
3qGvaOyzPTUow6v0e+Z/85Cfh29/+duYvvPDC8OY3v7nVAVP96DoOAhCAQIoE/FZHO+ywQ7juuuuC1jV48sknM5+izdgEgZiAF4p2jGCMKaXxGcGYRj10tQLB2BVPLS+mLhglFjXad9BBB7W8Pmtl0qLtM7T4jcX38TRl1d5ztAVn8iptUoLRpmytWbMmaJEeeW0P8od/+IdtgjHPRs5BAAIQSIGAX3zuN3/zN8PXv/71bKXUJ554IsjjIFAHAiYSfYhgTLPmEIxp1kubVQjGNhyN+KAN4YfdFmKcICTyuvm8vPPiW7y5c+dmYtI+K5RQ0zRV8xp9lLfPCoveeRxmSiqC0dcCxxCAQB0JIBjrWGvYHBPwQtGOEYwxpTQ+IxjTqIeuViAYu+Kp5cWvfvWr4fbbb6+l7YMYrZVW582b13GrjTgq1CikRij9OYnQPDeMYNT+i/L33XdfuPHGGzP/T//0T2GfffZhhDEPNucgAIHkCMSCcdGiRdk2QY899liQx0GgDgRMJPoQwZhmzSEY06yXNqsQjG04GvGhSAg1onBRIbQVh34Meu3ROKkpqQjGqIL4CAEI1I4AgrF2VYbBOQS8ULRjBGMOqAROIRgTqIReJiAYexGq3/XTTz892yh+lJZLCKXoJAT1QyCnBW/y9nLUtUkLxnvvvTfbOkPfr/PPP58RxhQbDzZBAAK5BLxg3H777cPVV18d7rzzzrBu3brM597ESQgkRsBEog8RjIlV0q/MQTCmWS9tViEY23A04oPe1RvFthpaFU/iU8uqq52k6LSVhn4M9E7iypUrC00sIxjjFVeVZtE2HkUZ2QgjgrGIEOchAIHUCSAYU68h7CtDwAtFO0YwliE3+TgIxskz7ztHBGPfyJK/QQ/GQbfVkOC5+OKLw4EHHpiJMIlPpadV8lJ0mn5r7yV2s88WzekVx9KysN/pvXmC8bzzzmOEsRt4rkEAAkkRiAXjVVddxQhjUjWEMWUImEj0IYKxDLnJx0EwTp553zkiGPtGlvwNgwjGW2+9NWi1Ue2HaCuK8pDtv6oRjP0z4w4IQCAtAgjGtOoDawYj4PswdoxgHIzluO9CMI6b8AjSRzCOAGJiSWy22WbhTW96U0+rtHn8ueeeG17zmte0jSbag9WHPGR74swi/PKXvwzyms57/fXXZ37+/Plhzpw52UitMS2XGrEgAAEITJ5ALBivvPLKcMcdd7TeYSwzY2PyVpMjBNoJ2O+tD+nLtDNK5ROCMZWa6GJHyoJRYuaiiy7Cl2BwzTXXhPXr12c1fdZZZwVNIcpzinPFFVeEt7/97WGLLbbIRhP9w7ToOLWHrATZKPyyZctai9PouzCMV1v9zGc+k3m13RNPPDHzf/Znf5a9B+rZ5tUN5yAAAQikQADBmEItYMOwBPxvrh2n1pcZtoxNuR/BWIOaTFkw2tRIwhc2ne/FQSJFLu/fX/07/Pd///fh5S9/ee6UU3uYFoW/8Ru/kYkeLYAzqNd0Vy0kk5rvxXUU12OuNXg0YCIEIDClBBCMU1rxDSt2/LurzwjGNCsZwZhmvbRZlZpg/OIXv9g2dS/vC8+5TXIZLViwIKvbyy+/PNx2223ZsaadHnrooT2nnMI0n+m4uLR9CfkAAQhAICECCMaEKgNTBiaQ9/uNYBwY51hvRDCOFe9oEk9NMGp07Hvf+17r/S97D4zwhffhijiIma3o+eIXvzgTidZCHnvssSAh/sY3vjFXaOY9VONz2223XZg1a1ah33fffcN+++03tD/44IPDUUcdNRKvKaGnnnrq0F5TfC+88MJSfuHChUFe23FoRFf+3e9+N1NSrTESQgACyRNAMCZfRRhYgkDcj9FnBGMJcBVEQTBWAL3fLFMTjGa/TaskfL41xbQbC+OmUA/FeFsNu3f16tVBAuhVr3pVX+JRgsm2migKLY9pDp988skgf88994Trrrsu8yx641snxxCAQOoEEIyp1xD2lSGAYCxDKY04CMY06qG
rFakKxq5Gc7ErgTzB6G+QoJPo06jk3/7t34Yddtihp3jkXzlPsPiYVVKL2XAFAhCoBwEEYz3qCSu7E0AwdueT0lUEY0q1UWALgrEATI1P9xKMvmgSj88880y2qurhhx8ettpqq1zxiGD01IqPEYzFbLgCAQjUgwCCsR71hJXdCSAYu/NJ6SqCMaXaKLAFwVgApsanjznmmGxrh36KYNNI161bl92r9xH9wxbBWI4mgrEcJ2JBAALpEkAwpls3WFaegO/D2DF9mfL8JhkTwThJ2gPmhWAcEFzCt5n4G9REu1/vO55xxhlhl1124UXxkjBTF4w333xzGIe/++67SxJKP9ry5cvbGD388MPpG42FEBghAQTjCGGSVGUETCT6EMFYWXV0zRjB2BVPGhcRjGnUwyit0Gb02k5jFE7iUe87aqVVXG8CqQvGce2BqVVhm+IOOeSQtr1CJbBxEJgmAgjGaart5pbVC0U7RjCmWd8IxjTrpc0qBGMbjkZ82GabbcLb3/72RpSlboVIXTDaj+aow7PPPrtuVVVor7Z28XwQjIWouDACAhrB9qP+KYzWIxhHULEkUTkB/xy3YwRj5dWSawCCMRdLWicRjGnVxyis0YMx3lZjFOmSRm8CdROMBx10UBiFv/TSS3vDqUkMBGNNKqohZkos+pH/FEbrEYwNaVxTXgwTiT5EMKbZKBCMadZLm1UIxjYcjfiAYKyuGusmGO191VGE1VEfbc4nn3xym4jWO404CIyLgASj79CmMFqPYBxXbZPuJAn475UdIxgnWQPl80IwlmdVWUwEY2Xox5YxgnFsaHsmXDfB2LNAUxghFs9TiIAiT5AAgnGCsMlqqgiYSPQhgjHNJoBgTLNe2qxCMLbhaMSHfffdN/zDP/xDI8pSt0IgGOtWY9gLgWoJIBjz+T/11FPh4osvDmvXrs2PwFkI9CDghaIdIxh7QKvoMoKxIvD9ZItg7IdWPeJqVVONktTNafGHCy64IBxxxBFBK1XK630ePyVQ11J2CMaUawfbIJAeAQRje53ceuut4a/+6q/CS1/60rDpppuGVatWtUfgEwRKEjCR6EMEY0l4E46GYJww8EGy05fHf5nseP369YMkxz0JEHjkkUdC3epPonDmzJlh7733Dpdcckm46aabMr9w4cJsH8jZs2cHeS1IkrJDMG6snbKrP/p9D8usEKk46mSb938obMy9+5HPU+n4vRa7XfOpWv4W+ms69uWPrxV9jstWFK/b+dj+vLjeNtnfjyuTvo9Tpk6Vv3FU6OujH9viuMOU09Lydqlcec7XW1Ecf5+Pr2ec/e4qnDt37lhY+Px7HU/6HcY1a9aEefPmhd133z1bAEhC0ZggGHvVFteLCFgb8iGCsYhWtecRjNXyL5V7kWAsdTORkiTw8pe/PLz73e9O0rY8o9R52nbbbcNhhx2W7fnoR0d1/NBDD4W99tor60BoQZKUHYJxY+2oo120+qOuabTYX7dj/XFw3HHHFYqGxYsXd9zXj8BQXMtLodqev7/sPow+DR2bk2CI07BreaHav8qrcsdp6vOcOXPCZZdd1mZjXjp2rihvlVEj9vrjpSifJUuWWDKFYVH63cqhPPNW/1S5VL4ie3S9Xyc7TjnllK7l7Cddb5vKbq5beXWP2neReBQLS9eLI3Vs9dmuKdR3ZdJuEoJRU05VD4ceemjYYostsnL7jr0dIxgnXfvNyc/akA8RjGnWL4IxzXppswrB2IajER/0cKzTthrqLMvmlStXFvLXiKPipL59A4JxYxWqo+t/qLX6o0SLCcW4o+zj6poEVF6HW38iaK9RH78fAaC4/t5jjz12o9EhZKPY/npRh93H0bGc7JUAjcvWlsGvPoiFhI1EQRw/TlvXJbqKbPHpx9uC6JqEoHiWyUffx24uL30x7VUO5W2CS2XX8Sjs8baaECuTblmevi5shkOZfGSDmOS1TX0XfLrdjsvUuWcwiuNxCkZNOdUoqn1PetUVgnE
dLXrbgs7WrMkiCWPg6362iAAC/gmQMPrnxbsRQMCFAjoBjoa9F8j2NOpkNgsLCyZ0KJjGhw8fTGgPsUZTU5M0NjaaqK+vFw2dLIfAgDJAGfBUBuww94aGBtF49+6dtLa2mtDksKurSwYGBkzMzMyIxvLysmxvb5uwPYs6Kyozo7rwB4lDQiAMBUgYw/Ck8JUQQCA0ArbH8e/fv6JxeHgoP378MGGHqX758kU0dLiqhk5eNDQ0ZMI28vr6+kTj48ePBAaUAcqAKQO2Xujv7xeNwcFBE6OjozI1NWVifn5eNBYXF03YJX/29vbk7OzMhB2CqqNUGKkSmt8K9opApAmQMEbaGed4EUDgXgF/Esbp6WnR0HuLhoeHTdgGoG0Q2gYiz1cJNA44RHIZsPWCvbBkLzTpRSc7MzMJ473VMy8ggEAIBUgYQ4jPrhFAIDwFbOKow73sVX27OLYdFmav/G9sbJgJcrQHcmlpyWPY3gKer3pNcMAhksrAffXCysqKaGgd8v37dxN2QhvtUdQ4OTkxcXFxYZbN0LqJBwIIIPDYAiSMjy3O/hBAIOwFSBhJaCIpoeFYg1veSRjDvsrnCyKAgBcBEkYvQLyMAAKRLXAzedQeR3t/o+15/P37txwdHZmwvZD7+/tCYEAZoAx4KgO2nrD1xvHx8fUSGXaJHzuhja1/IrsW5ugRQCDUAiSMoT4D7B8BBMJawDbYbAOOhJEkwFMSwP8oF76WARLGsK7y+XIIIOBBgITRAwr/QgABBLwJ2BkKbUKpz3Z5Dp6vlinBAQfKwP1l4GbdYesTb/UOryOAAAKhECBhDIU6+0QAgScvYBt4Nxt9NI7vbxxjgw1l4HYZuFl32PrkyVeMHAACCLhSgITRlaeVg0IAAQQQQAABBBBAAAEEnAuQMDo3ZAsIIIAAAggggAACCCCAgCsFSBhdeVo5KAQQQAABBBBAAAEEEEDAuQAJo3NDtoAAAggggAACCCCAAAIIuFKAhNGVp5WDQgABBBBAAAEEEEAAAQScC5AwOjdkCwgggAACCCCAAAIIIICAKwVIGF15WjkoBBBAAAEEEEAAAQQQQMC5AAmjc0O2gAACCCCAAAIIIIAAAgi4UoCE0ZWnlYNCAAEEEEAAAQQQQAABBJwL/A8JwfuPOP9KzwAAAABJRU5ErkJggg==) From this diagram, we can expect that we will interact with a Gym environment by giving it an action as input, and receiving a next state and reward as output. It is then our job to implement some algorithm to learn the policy $\pi(a|s)$ for our agent to use to act within the environment. Setting up visualizationFirst, let's get plotting for Gym working in Colab. This will help give us an intuitive feel for how to work with the Gym environments. ###Code def embed_mp4(filename): """Embeds an mp4 file in the notebook.""" video = open(filename,'rb').read() b64 = base64.b64encode(video) tag = ''' <video width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4"> Your browser does not support the video tag. 
    </video>'''.format(b64.decode())
    return IPython.display.HTML(tag)
###Output
_____no_output_____
###Markdown
Before we get to implementing algorithms for our agents to learn a good policy, let's visualize an agent acting according to a random policy. At this point the visualization is just to give us an idea of what an environment looks like, and later on we'll come back to see how we generate this video.

###Code
def create_random_policy_video(env, filename, num_episodes=5, fps=30):
    """Generates a visualization of an agent acting according to a random
    policy in the given environment."""
    display = Display(visible=0, size=(400, 300))
    display.start()
    filename = filename + ".mp4"
    with imageio.get_writer(filename, fps=fps) as video:
        for _ in range(num_episodes):
            done = False
            observation = env.reset()
            video.append_data(env.render(mode='rgb_array'))
            while not done:
                action = env.action_space.sample()
                observation, reward, done, info = env.step(action)
                video.append_data(env.render(mode='rgb_array'))
    display.stop()
    return embed_mp4(filename)

env = gym.make("MsPacman-v0")
create_random_policy_video(env, "video", num_episodes=1)
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
This is pretty cool! In this example Gym gives us a great visualization of our agent playing PacMan (though not very well).
Gym basics

Environments

As we would expect, an environment is defined by its state and action spaces. First of all, every environment exposes two Gym [spaces](https://gym.openai.com/docs/spaces):
- **`observation_space`**: defines the state space of the environment
- **`action_space`**: defines the action space of the environment

Spaces can be:
- `gym.spaces.Discrete`: fixed range of n values
- `gym.spaces.Box`: n-dimensional box

You can inspect the valid range for a `Box` via its `low` and `high` attributes.

###Code
env = gym.make("MsPacman-v0")
print(f'Action space: {env.action_space}')
print(f'Observation space: {env.observation_space}')
###Output
Action space: Discrete(9)
Observation space: Box(210, 160, 3)
###Markdown
We can see here that for `MsPacman-v0`, the `action_space` consists of 9 possible actions, and the `observation_space` is a 210 x 160 x 3 box (an RGB image). We can also extract these dimensions using `Discrete.n` and `Box.shape`.

###Code
print(env.action_space.n)
print(env.observation_space.shape)
###Output
9
(210, 160, 3)
###Markdown
If we're curious, we can find the action meanings by calling,

###Code
print(env.unwrapped.get_action_meanings())
###Output
['NOOP', 'UP', 'RIGHT', 'LEFT', 'DOWN', 'UPRIGHT', 'UPLEFT', 'DOWNRIGHT', 'DOWNLEFT']
###Markdown
I would've guessed that the `action_space` would be just up, down, left, right, but apparently this implementation includes combination actions as well. Theoretically you don't "need" to know these details about the environment you're using, because your algorithm should learn a good policy given whatever the available action space is, but I think it's still nice to get a sense of them.

Key functions for interacting with the environment

We will mainly use three functions for interacting with the environment.

**`observation = env.reset()`**
- This function returns the starting state of an environment. We will call this function any time we want to start a new episode.
**`observation, reward, done, info = env.step(action)`**
- This function is how your agent takes actions in the environment; it defines the transition and reward function. It takes in an `action` as an argument, and returns the next `observation` (the next state), the `reward` (**float**), whether the episode is `done` (**bool**), and `info`, which we won't be using here but can contain helpful information for debugging.

**`action = env.action_space.sample()`**
- This is a helpful function for sampling a random action from the `action_space`. We will be using the $\epsilon$-greedy exploration strategy, so we will use this function when we want to select a random action.

If we look back at the code for `create_random_policy_video()`, we can see how we used these three functions to get the data for the video. Stripping away all the code for plotting, the main loop is:

###Code
num_episodes = 10
env = gym.make("MsPacman-v0")

for _ in range(num_episodes):
    observation = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
###Output
_____no_output_____
###Markdown
In this notebook, we will generally replace the term `observation` with `state` because this is the wording we're more familiar with.

Implementing RL algorithms

Now that we have all of the setup done, let's get to the fun part of actually implementing the algorithms!

CartPole environment

For our implementations we are going to use the `CartPole-v1` environment. This is a simple environment where both of our algorithms (Q-learning and DQN) will be able to learn a good policy within a reasonably short amount of time.

###Code
env = gym.make("CartPole-v1")
###Output
_____no_output_____
###Markdown
The goal in this environment is to move the cart left or right in order to balance the pole so that it remains upright.
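The $\epsilon$-greedy strategy mentioned above is simple enough to sketch on its own before we wire it into an agent. This is a minimal, self-contained illustration and not part of the notebook's agent code — the `epsilon_greedy` helper, the `q_values` dict, and the two-action setup are hypothetical stand-ins:

```python
import random

def epsilon_greedy(q_values, state, n_actions, epsilon):
    """Return a random action with probability epsilon,
    otherwise the action with the highest Q-value."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    # greedy: argmax over the stored Q-values (unseen pairs default to 0.0)
    return max(range(n_actions), key=lambda a: q_values.get((state, a), 0.0))

q_values = {("s0", 0): 0.1, ("s0", 1): 0.7}
greedy_action = epsilon_greedy(q_values, "s0", n_actions=2, epsilon=0.0)
print(greedy_action)  # with epsilon=0 this is always the greedy action, 1
```

Annealing `epsilon` from a value near 1 toward a small floor shifts this same rule from mostly exploring to mostly exploiting, which is exactly what the agents below do.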
###Code
create_random_policy_video(env, "video", num_episodes=10)

print(f'Action space: {env.action_space}')
print(f'State space: {env.observation_space}')
###Output
Action space: Discrete(2)
State space: Box(4,)
###Markdown
This is a dramatically simpler environment than the MsPacman environment. The `observation_space` is a 4-dimensional array, and there are 2 possible actions. The CartPole [documentation](https://github.com/openai/gym/wiki/CartPole-v0) tells us the meanings of the observation and action spaces.

    Observation:
        Type: Box(4)
        Num   Observation             Min      Max
        0     Cart Position           -4.8     4.8
        1     Cart Velocity           -Inf     Inf
        2     Pole Angle              -24 deg  24 deg
        3     Pole Velocity At Tip    -Inf     Inf

    Actions:
        Type: Discrete(2)
        Num   Action
        0     Push cart to the left
        1     Push cart to the right

An episode terminates when the pole falls more than 12 degrees from vertical, the cart position moves off-screen, or the number of steps within the episode exceeds 500.

Required functions for learning a policy

###Code
class Agent:

    def create_model(self):
        """This model will be used by the act() method to select actions,
        and will be updated during training."""
        pass

    def act(self, state, test=False):
        """This function implements your policy and chooses actions based on
        the current model. If test=True, actions are chosen without
        exploration."""
        return action

    def update_model(self):
        """This function specifies how to update the model based on
        experience in the environment."""
        pass

    def train(self):
        """The main loop for training the model by selecting actions,
        interacting with the environment, and updating the model."""
        pass
###Output
_____no_output_____
###Markdown
Once we have a trained model, we can evaluate its performance using a similar loop to the one above used to visualize the random policy. The only difference is that we replace the `env.action_space.sample()` function call with `agent.act()`.
    agent = Agent()
    agent.train()

    # run trained agent
    for _ in range(num_episodes):
        state = agent.env.reset()
        done = False
        while not done:
            action = agent.act(state, test=True)
            next_state, reward, done, info = agent.env.step(action)

Evaluation functions

Now we can use this loop to generate a video visualizing the learned policy.

###Code
def create_learned_policy_video(agent, filename, num_episodes=5, fps=30):
    """Generates a video of the given agent acting according to its learned
    policy for the specified number of episodes."""
    display = Display(visible=0, size=(400, 300))
    display.start()
    filename = filename + ".mp4"
    with imageio.get_writer(filename, fps=fps) as video:
        for _ in range(num_episodes):
            done = False
            state = agent.env.reset()
            video.append_data(agent.env.render(mode='rgb_array'))
            while not done:
                action = agent.act(state, test=True)
                state, reward, done, info = agent.env.step(action)
                video.append_data(agent.env.render(mode='rgb_array'))
    display.stop()
    return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
We will also want to evaluate the performance of the learned model. For evaluation trials we will not use $\epsilon$-greedy exploration, but instead always choose the best action according to our learned policy.

###Code
def evaluate_policy(agent, num_episodes=10):
    """Runs the agent through the specified number of episodes and prints
    the average return."""
    reward_history = []
    for _ in range(num_episodes):
        state = agent.env.reset()
        total_reward = 0
        done = False
        while not done:
            action = agent.act(state, test=True)
            next_state, reward, done, _ = agent.env.step(action)
            total_reward += reward
            state = next_state
        reward_history.append(total_reward)
    print("Exploit reward average: {}".format(np.mean(reward_history).round(2)))
###Output
_____no_output_____
###Markdown
Q-Learning

Tabular Q-learning stores and updates a Q-value estimate for each state-action pair, $Q(s,a)$. Each of these Q-values is stored in a look-up table.
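As a toy illustration of such a look-up table, here is the tabular update rule applied to a single made-up transition. The states, reward, and parameter values below are hypothetical, chosen only to show the arithmetic; the notebook's real implementation follows:

```python
from collections import defaultdict

alpha, gamma = 0.5, 0.9
Q = defaultdict(float)  # look-up table: (state, action) -> Q-value estimate

# one hypothetical transition (s, a, r, s') with two actions available in s'
s, a, r, s_next = 0, 1, 1.0, 2
best_next = max(Q[s_next, 0], Q[s_next, 1])           # max_a' Q(s', a') = 0.0
Q[s, a] += alpha * (r + gamma * best_next - Q[s, a])  # Q(s,a) <- 0 + 0.5*1.0
print(Q[s, a])  # 0.5
```

Because `defaultdict(float)` returns 0.0 for unseen pairs, the very first update simply moves $Q(s,a)$ a fraction $\alpha$ of the way toward the observed reward.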
**Discretize environment observations**

Since the cartpole environment has an `observation_space` with continuous values, the number of Q-values we would need to store and update would quickly explode, and the individual estimates would also not be very useful. To avoid this we are going to use a wrapper for the environment that transforms the `observation_space` from a continuous-valued `Box` to a discrete-valued `Discrete`. I got this wrapper from Lilian Weng's Q-learning [implementation](https://github.com/lilianweng/deep-reinforcement-learning-gym/blob/master/playground/utils/wrappers.py) (understanding the details of how she implements this isn't important for what we're focusing on).

###Code
class DiscretizedObservationWrapper(gym.ObservationWrapper):
    """This wrapper converts a Box observation into a single integer."""
    def __init__(self, env, n_bins=10, low=None, high=None):
        super().__init__(env)
        assert isinstance(env.observation_space, Box)

        low = self.observation_space.low if low is None else low
        high = self.observation_space.high if high is None else high

        self.n_bins = n_bins
        self.val_bins = [np.linspace(l, h, n_bins + 1) for l, h in
                         zip(low.flatten(), high.flatten())]
        self.observation_space = Discrete(n_bins ** low.flatten().shape[0])

    def _convert_to_one_number(self, digits):
        return sum([d * ((self.n_bins + 1) ** i) for i, d in enumerate(digits)])

    def observation(self, observation):
        digits = [np.digitize([x], bins)[0]
                  for x, bins in zip(observation.flatten(), self.val_bins)]
        return self._convert_to_one_number(digits)

env = gym.make('CartPole-v1')
env_discrete = DiscretizedObservationWrapper(env)
print(env_discrete.action_space)
print(env_discrete.observation_space)
###Output
Discrete(2)
Discrete(10000)
###Markdown
Algorithm

The next step is to implement the Q-learning algorithm. The method names give you the skeleton of the implementation, but the content is left for you to fill in. We've inserted detailed comments to guide your implementation.
We've also left in some code that is not essential to the algorithm (e.g. decaying the epsilon parameter each step, keeping track of reward history).

You need to fill in the content for three of these methods:
- `create_model()` - filled in already
- `act()`
- `update_model()`
- `train()`

create_model()

We've left the code for `create_model()` filled in because it only creates a dictionary for storing the Q-values. We are using `defaultdict(float)` rather than `dict` for a more convenient implementation: it automatically initializes any key entry to 0.0, rather than raising a `KeyError`.

###Code
# define environment
env = gym.make("CartPole-v1")
env_discrete = DiscretizedObservationWrapper(
    env,
    n_bins=8,
    low=np.array([-2.4, -2.0, -0.42, -3.5]),
    high=np.array([2.4, 2.0, 0.42, 3.5])
)

# get example state-action pair
state = env_discrete.reset()
action = env_discrete.action_space.sample()

# define defaultdict and query the state-action pair
example = defaultdict(float)
example[state, action]  # *no KeyError*
###Output
_____no_output_____
###Markdown
act()

For our implementation, we will be using the $\epsilon$-greedy exploration policy.

\begin{equation}
  a =
  \begin{cases}
    \text{random} & \text{with probability $\epsilon$}\\
    \arg\max_a Q(s,a) & \text{otherwise}\\
  \end{cases}
\end{equation}

update_model()

This function should update the Q-value estimate using the Q-learning update rule based on the experience $(s,a,r,s',\text{done})$.

$$ Q(s,a) \leftarrow Q(s,a) + \alpha \left[r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$$

If the state is terminal (`done=True`) the update will be

$$ Q(s,a) \leftarrow Q(s,a) + \alpha \left[r - Q(s,a) \right]$$

train()

This function will run the main training loop. Here is the pseudocode for the Q-learning algorithm.
    create model (initialize q_values)
    for n_episodes
        initialize state
        while not done
            select action according to policy
            execute action; observe reward and next_state
            update model

This function will be used to train the agent as follows:

    agent = Agent(env)
    agent.train()

Remember, these are the environment API calls you will need to use in your implementation.
- `observation = env.reset()`
- `observation, reward, done, info = env.step(action)`
- `action = env.action_space.sample()`

Implementation

###Code
class QLearning:
    def __init__(self, env, gamma=0.9, alpha=0.5, epsilon=0.99,
                 epsilon_decay=0.9999, epsilon_min=0.1):
        self.env = env
        self.gamma = gamma
        self.alpha = alpha
        self.epsilon = epsilon
        self.epsilon_decay = epsilon_decay
        self.epsilon_min = epsilon_min
        self.actions = range(self.env.action_space.n)

    def create_model(self):
        """For Q-learning the model is simply a dictionary for storing the
        tabular Q-values."""
        self.Q = defaultdict(float)

    def act(self, state, test=False):
        """Choose action based on your current model using epsilon-greedy
        exploration."""
        # update epsilon
        self.epsilon *= self.epsilon_decay
        self.epsilon = max(self.epsilon_min, self.epsilon)

        # set epsilon to 0 if testing the learned policy
        epsilon = 0 if test else self.epsilon

        # Select a random action with probability epsilon, otherwise select
        # the action with the highest Q-value. In cases where multiple actions
        # have the same highest Q-value, return a random choice between these
        # actions.
        # --------------------------------------------
        # Your code here
        # --------------------------------------------
        return action

    def update_model(self, state, action, reward, next_state, done):
        """Update Q(s,a) using the Q-learning update rule."""
        # Apply the learning rule to update self.Q[state, action].
        # --------------------------------------------
        # Your code here
        # --------------------------------------------

    def train(self, num_episodes=20):
        """This is the main training loop for interacting with the
        environment and updating your model. We've left in code for storing
        training history."""
        # keep track of reward history
        self.reward_history = []

        # initialize Q-values
        self.create_model()

        for episode in range(num_episodes):
            total_reward = 0.0

            # Implement the training loop for Q-learning (pseudocode above).
            # --------------------------------------------
            # Your code here
            # --------------------------------------------

            # Save total reward from episode and print training progress.
            self.reward_history.append(total_reward)
            if episode % 500 == 0:
                print("episode {}: {} average reward".format(
                    episode,
                    np.mean(self.reward_history[max(0, episode-500):episode+1]).round(2)))
###Output
_____no_output_____
###Markdown
Training an agent

Once you have your algorithm implemented, let's train an agent!

###Code
env = gym.make('CartPole-v1')
env_discrete = DiscretizedObservationWrapper(
    env,
    n_bins=8,
    low=np.array([-2.4, -2.0, -0.42, -3.5]),
    high=np.array([2.4, 2.0, 0.42, 3.5])
)

seed = 0
env.seed(seed)
env.action_space.seed(seed)
np.random.seed(seed)
random.seed(seed)

qlearning_agent = QLearning(env_discrete)
qlearning_agent.train(num_episodes=5000)

# visualize total reward per episode across training
plt.figure(figsize=(7,4))
plt.plot(qlearning_agent.reward_history, alpha=.3, color='teal', label='raw')
plt.plot(np.convolve(qlearning_agent.reward_history, np.ones((50,))/50, mode='valid'),
         color='purple', label='smoothed')
plt.xlabel('episode #', fontsize=15)
plt.ylabel('total reward per episode', fontsize=15)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Evaluating the agent

First, let's see what the average reward is across 100 trials when the agent is exploiting its learned policy (not using $\epsilon$-greedy exploration).
Now, let's visualize the agent acting according to its learned policy.

###Code
evaluate_policy(qlearning_agent, num_episodes=100)
create_learned_policy_video(qlearning_agent, "video", num_episodes=1)
###Output
xdpyinfo was not found, X start can not be checked! Please install xdpyinfo!
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (400, 600) to (400, 608) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
Woo, it learned something! This is definitely an improvement over random, but it's certainly not optimal (a reward of 500 is optimal). This agent could get better with more training, but it would probably take a long time for it to reach optimal performance.

What is the model learning?

We can tell that the model has learned something, but it would be nice to get some idea of what it's learned. In order to get a sense of this, we can visualize the learned Q-values across a set of states. For this example, we are going to plot Q-values as a function of pole velocity, while the cart position, cart velocity, and pole angle are all 0 (the pole is in the center, not moving, and upright). Intuitively, the agent should have learned to push the cart right if the pole velocity is to the right (>0), and to push the cart left if the pole velocity is to the left (<0).
###Code
n_obs = 40
obs_array = np.zeros((n_obs, env.observation_space.shape[0]))
obs_array[:,0] = 0
obs_array[:,1] = 0
obs_array[:,2] = 0
obs_array[:,3] = np.linspace(-5, 5, n_obs)

# run model
q_values = np.zeros((n_obs, env.action_space.n))
for i, obs in enumerate(obs_array):
    obs_discrete = env_discrete.observation(obs)
    q_values[i] = [qlearning_agent.Q[obs_discrete, action]
                   for action in qlearning_agent.actions]

# visualize results
plt.figure(figsize=(8,5))
plt.plot(obs_array[:,3], q_values[:,0], color='purple', label='push cart left', linewidth=2)
plt.plot(obs_array[:,3], q_values[:,1], color='green', label='push cart right', linewidth=2)
plt.vlines(0, q_values.min(), q_values.max(), linestyle='--', color='dimgray')
plt.xlabel('Pole Velocity', fontsize=15)
plt.ylabel('Q value', fontsize=15)
plt.title('Cart Position=0, Cart Velocity=0, Pole Angle=0', fontsize=15)
plt.legend(fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
It does what we expect! The Q-values for a=right are larger than the Q-values for a=left when the pole velocity is greater than 0, and vice versa for when the pole velocity is less than 0.

Deep Q-Networks (DQN)

Now that we've implemented Q-learning, let's move on to implementing DQNs!

Algorithm

Similar to Q-learning, the method names for the DQN class give you the skeleton of the implementation, while the content is left for you to fill in. For DQNs you will need to write a few more functions than you needed for Q-learning.

You need to fill in the content for six functions:
- `create_model()` - filled in already
- `act()`
- `remember()`
- `update_model()`
- `update_target()` - filled in already
- `train()`

create_model()

For this implementation, we're going to use a two-layer densely connected network. This network will take in a state as input and output a Q-value estimate for each action within this state.
Consequently, the input dim is the same as the `observation_space` shape (4 for the CartPole environment), and the output dim is the same as the `action_space` shape (2 for the CartPole environment). We've left the code for `create_model()` filled in because it is largely determined by learning TensorFlow syntax, which isn't our focus here. We will use ReLu activation function and the Adam optimizer.We will use mean squared error loss, as specified by the DQN loss function. def create_model(self): model = Sequential() model.add(Dense(24, input_dim=self.state_shape[0], activation="relu")) model.add(Dense(16, activation="relu")) model.add(Dense(self.env.action_space.n)) model.compile(loss="mean_squared_error", optimizer=Adam(lr=self.learning_rate)) return model act() We will again use the $\epsilon$-greedy exploration policy. \begin{equation} a = \begin{cases} \text{random} & \text{with probability $\epsilon$}\\ \arg\max_a Q(s,a;\theta) & \text{otherwise}\\ \end{cases} \end{equation}To get the Q-values from your Q-network, you can run q_values = self.model.predict(state) remember()After each step, we need to store the $(s, a,r,s', \text{done})$ experience in the replay memory. In this implementation we will store memories in a `deque` with a specified maximum legnth (memory capacity). ###Code replay_memory = deque(maxlen=5) print(replay_memory) for i in range(7): replay_memory.append(i) print(replay_memory) # in your implementation you will append the experience # [state, action, reward, next_state, done] instead of i ###Output deque([], maxlen=5) deque([0], maxlen=5) deque([0, 1], maxlen=5) deque([0, 1, 2], maxlen=5) deque([0, 1, 2, 3], maxlen=5) deque([0, 1, 2, 3, 4], maxlen=5) deque([1, 2, 3, 4, 5], maxlen=5) deque([2, 3, 4, 5, 6], maxlen=5) ###Markdown update_model()To update the model, you'll need to:1. sample a batch of experiences $(s_j, a_j, r_j, s_j', \text{done}_j)$ from memory2. 
calculate the target output (`done` indicates if $s$ is terminal)\begin{equation} y_j = \begin{cases} r_j + \gamma \max_{a'} Q(s_j',a'; \theta^-) & \text{if $s$ is not terminal}\\ r_j & \text{if $s$ is terminal}\\ \end{cases} \end{equation}3. Perform gradient descent step according to$$ L(\theta) = \left\langle \big( y_j - Q(s_j,a_j; \theta) \big)^2\right\rangle_{(s,a,r,s') \sim Uniform(Memory)}$$For the third step, the TensorFlow code you will need is model.fit(batch_states, batch_target, epochs=1, verbose=0)**NOTE**The `batch_target` must be the same dimensions as the model output. This means you must have a target for every action for each input state in your batch of experiences $(s_j,a_j,r_j,s_j')$. For each action $a$ that is not in the experience batch, use the current output of the target model as the target value, $Q(s,a;\theta^-)$. Therefore, for each state $s_j$, \begin{equation} \text{target} = \begin{cases} y_j & \text{if $a$ is in experience batch $(s_j, a_j, r_j, s_j')$}\\ Q(s,a;\theta^-) & \text{if $a$ is NOT in experience batch $(s_j, a_j, r_j, s_j')$}\\ \end{cases} \end{equation}**NOTE 2**Here is a helpful line of code for reformatting samples from memory, each of which will be a list `[state, action, reward, next_state, done]`, to a set of `np.array`s with dim (n_batch x __). batch_states, batch_actions, batch_rewards, batch_next_states, batch_done = map(np.asarray, zip(*memory_samples)) update_target()This function is used to set the target network weights equal to the model network weights. This is only done periodically throughout training, which reduces variance in the gradient across steps and stabilizes training. We've left the code for `update_target()` filled in because, again, it is largely just Tensorflow syntax. You'll have to use this function appropriately within the main training loop.
def update_target(self): weights = self.model.get_weights() self.target_model.set_weights(weights) train()This function will run the main training loop. Here is the pseudocode for the DQN algorithm. initialize Q-network (create model) initialize target network (create model and set weights equal to Q-network) for n_episodes initialize state while not done select action according to policy execute action; observe reward and next_state add experience (state, action, reward, next_state, done) to memory sample batch of (state, action, reward, next_state, done) experiences from memory and update model every C steps, update target model Same as for Q-learning, the Gym api calls you will need are:- `observation = env.reset()`- `observation, reward, done, info = env.step(action)`- `action = env.action_space.sample()`The Tensorflow api calls you will need are: - `model_output = model.predict(model_input)`- `model.fit(model_input, model_target, epochs=1, verbose=0)` Implementation ###Code class DQN: def __init__(self, env, memory_cap=1000, gamma=0.9, epsilon=0.99, epsilon_decay=0.995, epsilon_min=0.01, learning_rate=0.005, batch_size=32, C=20): self.env = env self.memory = deque(maxlen=memory_cap) self.state_shape = env.observation_space.shape self.gamma = gamma self.epsilon = epsilon self.epsilon_min = epsilon_min self.epsilon_decay = epsilon_decay self.learning_rate = learning_rate self.batch_size = batch_size self.C = C def create_model(self): """We will use a two-layer perceptron. The input dim must equal the state space dim and the output dim must equal the action space dim, but you can play around with the size of the hidden layers. 
For DQNs, we need mean squared error loss.""" model = Sequential() model.add(Dense(24, input_dim=self.state_shape[0], activation="relu")) model.add(Dense(16, activation="relu")) model.add(Dense(self.env.action_space.n)) model.compile(loss="mean_squared_error", optimizer=Adam(lr=self.learning_rate)) return model def act(self, state, test=False): """Choose action based on your current model using epsilon-greedy exploration""" # update epsilon self.epsilon *= self.epsilon_decay self.epsilon = max(self.epsilon_min, self.epsilon) # set epsilon to 0 if testing the learned policy epsilon = 0.01 if test else self.epsilon # reshape state to feed into model - tensorflow thing, shape must be # (1, input_dim), not (input_dim,) state = state.reshape((1, self.state_shape[0])) # Select a random action with probability epsilon, otherwise select # the action with the highest Q-value estimate. # -------------------------------------------- # Your code here # -------------------------------------------- return action def remember(self, state, action, reward, new_state, done): """Append experience to memory""" # -------------------------------------------- # Your code here # -------------------------------------------- def update_model(self): """This function updates the q-network model. You must 1) sample a batch of experiences from the replay memory 2) calculate the target for each expereince, 3) update the model by calling model.fit()""" # only update model once have sufficient number of experiences in memory if len(self.memory) < self.batch_size: return # 1. Sample a batch of experiences from memory # 2. Reformat the samples into a set of np.arrays (convenient code for this provided above) # 3. Calculate the target for batch of experiences # 4. Get the batch_target for fitting model (using target network predictions # for actions not in experience batch) # 5. 
Update the model by running model.fit(batch_states, batch_target, epochs=1, verbose=0) # -------------------------------------------- # Your code here # -------------------------------------------- def update_target(self): """Sets target weights equal to model weights.""" weights = self.model.get_weights() self.target_model.set_weights(weights) def train(self, num_episodes=50): """This function implements the main training loop.""" # keep track of total reward per episode self.reward_history = [] # initialize model and target model self.model = self.create_model() self.target_model = self.create_model() self.target_model.set_weights(self.model.get_weights()) # we need to keep track of steps now so we can update the model every # C steps step = 0 for episode in range(num_episodes): total_reward = 0 # Implement the training loop for DQN (pseudocode above). # -------------------------------------------- # Your code here # -------------------------------------------- # Save total reward from episode and print training progress. self.reward_history.append(total_reward) print("episode {}: {} reward".format(episode, total_reward)) ###Output _____no_output_____ ###Markdown Training an agent ###Code env = gym.make('CartPole-v1') seed = 2 env.seed(seed) env.action_space.seed(seed) np.random.seed(seed) random.seed(seed) tf.random.set_seed(seed) dqn_agent = DQN(env, batch_size=32) dqn_agent.train(num_episodes=35) # visualize total reward per episode across training plt.figure(figsize=(7,4)) plt.plot(dqn_agent.reward_history, alpha=1, color='purple', label='raw') plt.xlabel('episode #', fontsize=15) plt.ylabel('total reward per episode', fontsize=15) plt.title('DQN Agent') plt.show() ###Output _____no_output_____ ###Markdown Evaluating the agent ###Code evaluate_policy(dqn_agent, num_episodes=1) create_learned_policy_video(dqn_agent, "video", num_episodes=1) ###Output xdpyinfo was not found, X start can not be checked! Please install xdpyinfo!
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (400, 600) to (400, 608) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned. ###Markdown Yay, it learned the task! This is also not quite optimal, but it's only been 35 episodes! This is a huge change from our Q-learning agent, which took 5000 episodes to reach worse performance than our DQN agent. This tells us how useful it is to keep the continuous-valued inputs and use a neural network to approximate the Q-value function. What is the model learning? ###Code n_obs = 40 obs_array = np.zeros((n_obs, env.observation_space.shape[0])) obs_array[:,0] = 0 obs_array[:,1] = 0 obs_array[:,2] = 0 obs_array[:,3] = np.linspace(-5, 5, n_obs) # run model q_values = dqn_agent.model.predict(obs_array) # visualize results plt.figure(figsize=(8,5)) plt.plot(obs_array[:,3], q_values[:,0], color='purple', label='push cart left', linewidth=2) plt.plot(obs_array[:,3], q_values[:,1], color='green', label='push cart right', linewidth=2) plt.vlines(0, q_values.min(), q_values.max(), linestyle='--', color='dimgray') plt.xlabel('Pole Velocity', fontsize=15) plt.ylabel('Q value', fontsize=15) plt.title('Cart Position=0, Cart Velocity=0, Pole Angle=0', fontsize=15) plt.legend(fontsize=13) plt.show() ###Output _____no_output_____
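Before leaving DQNs, it can help to sanity-check the Bellman-target arithmetic that `update_model()` relies on, in isolation from any network. Below is a small sketch with hand-picked toy numbers — the `q_next` array is just a stand-in for the target network's predictions on a batch of next states:

```python
import numpy as np

gamma = 0.9
batch_rewards = np.array([1.0, 1.0, 1.0])
# stand-in for target_model.predict(batch_next_states):
# one row per experience, one column per action
q_next = np.array([[0.5, 2.0],
                   [1.5, 0.0],
                   [3.0, 4.0]])
batch_done = np.array([False, False, True])

# y_j = r_j + gamma * max_a' Q(s_j', a'; theta^-),
# except y_j = r_j when the transition is terminal
targets = batch_rewards + gamma * q_next.max(axis=1) * (1 - batch_done)
print(targets)  # targets: 2.8, 2.35, 1.0
```

The terminal case matters: without the `(1 - batch_done)` factor the agent would bootstrap from a fictitious state after the episode has ended.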
pfb_introduction.ipynb
###Markdown An interactive introduction to polyphase filterbanks**Author:** Danny Price, UC Berkeley**License:** [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) ###Code %matplotlib inline # Import required modules import numpy as np import scipy from scipy.signal import firwin, freqz, lfilter import matplotlib.pyplot as plt import seaborn as sns sns.set_style("white") def db(x): """ Convert linear value to dB value """ return 10*np.log10(x) ###Output _____no_output_____ ###Markdown IntroductionIf you've opened up this notebook, you're probably trying to learn about polyphase filterbanks and/or spectrometers and found it all a bit confusing. This notebook is here to help. To get the most out of this notebook, you should supplement it with a more rigorous overview of the PFB and spectrometers. I've written up a [chapter on spectrometers in radio astronomy](http://arxiv.org/abs/1607.03579) which can serve as your noble steed. There is quite a bit of background knowledge about digital signal processing (DSP) that I'm not going to present -- head on over to the free [DSP Guide](http://www.dspguide.com/ch1.htm) by Stephen Smith if you need a refresher. What is a PFB?A polyphase filterbank (PFB) is simply an efficient computational structure used to form a bank of filters. All that is required to form a PFB is to place a "prototype polyphase filter structure" in front of an FFT. The frontend enhances the filter response of the FFT, making it better by using time samples and filter coefficients.That's it. For more information, have a read of [this chapter](http://arxiv.org/abs/1607.03579). As a first call though, let's look at polyphase decomposition, and how to do it using `Numpy`. Polyphase decompositionPolyphase decomposition is at the heart of the PFB technique, and is just decomposing a signal $x(n)$ into multiple 'phases' or 'branches'. 
For example, even and odd decomposition is just:$$\begin{eqnarray}x_{even}(n') & = & \left\{ x(0),x(2),x(4),...\right\} \\x_{odd}(n') & = & \left\{ x(1),x(3),x(5),...\right\} .\end{eqnarray}$$More generally, we can decompose $x(n)$ into $P$ phases, denoted $x_p(n')$. Below is a simple example of polyphase decomposition using numpy: ###Code x = np.array([1,2,3,4,5,6,7,8,9,10]) P = 5 x_p = x.reshape((len(x)//P, P)).T print (x_p) ###Output [[ 1 6] [ 2 7] [ 3 8] [ 4 9] [ 5 10]] ###Markdown The PFB frontendNext, let's have a look at the polyphase frontend. This sounds fancy but isn't all that complicated. The purpose of the PFB frontend is to convert your set of $P$ polyphase branches $x_p(n')$ into a set of subfiltered signals, $y_p(n')$$$\begin{equation}y_{p}(n')=\sum_{m=0}^{M-1}h_{p}(m)x_{p}(n'-m),\end{equation}$$where $h_p$ are filter coefficients that have been divided between the $P$ branches.Here is a diagram showing the operations performed by the frontend, for $M=3$ taps:![pfb_chart](diagrams/pfb_chart.png)The diagram shows an input signal being divided into $M$ taps, each with $P$ points. Within each tap, the signal is multiplied by the filter coefficients, then a sum across taps is performed. After this, another $P$ points are read, and the signals propagate left-to-right into the next tap (following the arrows).Not 100% sure you really understand that diagram? Well, let's try and code it up, and hopefully get a better handle on what's happening. Here's a simple implementation: ###Code def pfb_fir_frontend(x, win_coeffs, M, P): W = int(x.shape[0] / M / P) x_p = x.reshape((W*M, P)).T h_p = win_coeffs.reshape((M, P)).T x_summed = np.zeros((P, M * W - M)) for t in range(0, M*W-M): x_weighted = x_p[:, t:t+M] * h_p x_summed[:, t] = x_weighted.sum(axis=1) return x_summed.T ###Output _____no_output_____ ###Markdown Wow. Only 9 lines required! This is short enough for us to go through line by line:1. Function declaration. 
The frontend reads in: * an input signal x (a numpy array). For this simple code, x has to be a multiple of $M*P$ * some window coefficients, * an integer M representing the number of taps * an integer P representing the number of branches2. Compute the number of windows of length $P$ there are in the data.3. We apply polyphase decomposition on $x(n)$ to get a set of branches $x_p(n')$.4. We also divide the window coefficients into branches.6. Instantiate an empty array to store the signal $y_p(n')$. This is a little shorter than the original $x_p(n')$ as it takes a few cycles for the taps to fill up with data.7. Now we start a loop, so we can multiply through each time step by the filter coefficients. 8. This is the magic line. we take $M$ samples from each branch, $x_p(n')$, and multiply it through by the filter coefficients. We need to march through the entire `x_p` array, hence the loop.9. Now we sum over taps.10. Return the data, with a transpose so that axes are returned as (time, branch).Let's apply this to some example data. To do that, we'll need a function to generate window coefficients. Fortunately, this is built in to `scipy`. 
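Before generating real window coefficients, the "magic line" in step 8 can be made concrete with tiny hand-picked numbers. This is a standalone sketch with P=2 branches and M=2 taps; the all-0.5 coefficients are purely for illustration:

```python
import numpy as np

P, M = 2, 2                        # branches, taps
x = np.arange(1, 9)                # x(n) = 1..8
x_p = x.reshape((-1, P)).T         # polyphase branches: [[1,3,5,7],[2,4,6,8]]
h_p = np.full((P, M), 0.5)         # window coefficients split across branches

t = 0
x_weighted = x_p[:, t:t+M] * h_p   # take M samples from each branch, weight them
y_t = x_weighted.sum(axis=1)       # sum across taps -> one output per branch
print(y_t)                         # [2. 3.]
```

Branch 0 combines samples 1 and 3 into 2.0, branch 1 combines 2 and 4 into 3.0 — exactly the multiply-and-sum the diagram describes for one output step.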
We can make a simple function to generate a `sinc` of the right length and multiply it through by the window of our choice: ###Code def generate_win_coeffs(M, P, window_fn="hamming"): win_coeffs = scipy.signal.get_window(window_fn, M*P) sinc = scipy.signal.firwin(M * P, cutoff=1.0/P, window="rectangular") win_coeffs *= sinc return win_coeffs M = 8 P = 32 x = np.sin(np.arange(0, M*P*10) / np.pi) win_coeffs = generate_win_coeffs(M, P, window_fn="hamming") plt.subplot(2,1,1) plt.title("Time samples") plt.plot(x) plt.xlim(0, M*P*3) plt.subplot(2,1,2) plt.title("Window function") plt.plot(win_coeffs) plt.xlim(0, M*P) plt.tight_layout(pad=1.0) plt.show() ###Output _____no_output_____ ###Markdown Now we are ready to try applying `pfb_fir_frontend` to our data: ###Code y_p = pfb_fir_frontend(x, win_coeffs, M, P) print("n_taps: %i" % M) print("n_branches: %i" % P) print("Input signal shape: %i" % x.shape) print("Window shape: %i" % win_coeffs.shape) print("Output data shape: %s" % str(y_p.shape)) ###Output n_taps: 8 n_branches: 32 Input signal shape: 2560 Window shape: 256 Output data shape: (72, 32) ###Markdown And we can plot the output `y_p` using `imshow`: ###Code plt.figure() plt.imshow(y_p) plt.xlabel("Branch") plt.ylabel("Time") plt.figure() plt.plot(y_p[0], label="p=0") plt.plot(y_p[1], label="p=1") plt.plot(y_p[2], label="p=2") plt.xlabel("Time sample, $n'$") plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Don't spend too much time trying to interpret this! The frontend only becomes interesting when you follow it up with an FFT. Polyphase filterbanknow we have an PFB frontend, all we need is to add on an FFT. 
Here is the code to implement a simple PFB in python: ###Code def fft(x_p, P, axis=1): return np.fft.rfft(x_p, P, axis=axis) def pfb_filterbank(x, win_coeffs, M, P): x_fir = pfb_fir_frontend(x, win_coeffs, M, P) x_pfb = fft(x_fir, P) return x_pfb ###Output _____no_output_____ ###Markdown The first function is just a helper, and uses the in-built `numpy.fft` library. We apply the FFT over a given axis, which in this case is branches (the number of branches == length of FFT).The actual `pfb_filterbank` function is now just two lines long: apply a `pfb_fir_frontend` to the data, and then apply an `fft` to the output. The final step is taking the output of the `pfb_filterbank`, squaring it, and taking an average over time. Finally, here's a function that implements a spectrometer: ###Code def pfb_spectrometer(x, n_taps, n_chan, n_int, window_fn="hamming"): M = n_taps P = n_chan # Generate window coefficients win_coeffs = generate_win_coeffs(M, P, window_fn) # Apply frontend, take FFT, then take power (i.e. 
square) x_fir = pfb_fir_frontend(x, win_coeffs, M, P) x_pfb = fft(x_fir, P) x_psd = np.abs(x_pfb)**2 # Trim array so we can do time integration x_psd = x_psd[:np.round(x_psd.shape[0]//n_int)*n_int] # Integrate over time, by reshaping and summing over axis (efficient) x_psd = x_psd.reshape(x_psd.shape[0]//n_int, n_int, x_psd.shape[1]) x_psd = x_psd.mean(axis=1) return x_psd ###Output _____no_output_____ ###Markdown Let's try it out by generating some data ###Code M = 4 # Number of taps P = 1024 # Number of 'branches', also fft length W = 1000 # Number of windows of length M*P in input time stream n_int = 2 # Number of time integrations on output data # Generate a test data steam samples = np.arange(M*P*W) noise = np.random.normal(loc=0.5, scale=0.1, size=M*P*W) freq = 1 amp = 0.02 cw_signal = amp * np.sin(samples * freq) data = noise + cw_signal ###Output _____no_output_____ ###Markdown Which we can have a quick look at first: ###Code plt.subplot(3,1,1) plt.title("Noise") plt.plot(noise[:250]) plt.subplot(3,1,2) plt.title("Sin wave") plt.plot(cw_signal[:250]) plt.subplot(3,1,3) plt.title("Noise + sin") plt.plot(data[:250]) plt.xlabel("Time samples") plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown Now, let's compute the spectrum and plot it over frequency vs. time using `imshow` ###Code X_psd = pfb_spectrometer(data, n_taps=M, n_chan=P, n_int=2, window_fn="hamming") plt.imshow(db(X_psd), cmap='viridis', aspect='auto') plt.colorbar() plt.xlabel("Channel") plt.ylabel("Time") plt.show() ###Output _____no_output_____ ###Markdown This plot over frequency vs. time is known as a *waterfall plot*. At the moment, we can't see the sin wave we put in there. If we integrate longer, the noise integrates down as $\sqrt{t}$ (see the radiometer equation), whereas the sin wave is coherent. 
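That square-root behaviour is quick to verify numerically: averaging n samples of noisy power should shrink its standard deviation by a factor of about sqrt(n). Here is a standalone check, independent of the spectrometer code above (the loc/scale values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
power = rng.normal(loc=1.0, scale=0.1, size=(10000, 100))  # fake power samples

std_raw = power[:, 0].std()          # no integration
std_avg = power.mean(axis=1).std()   # integrate over n_int = 100 samples

print(std_raw / std_avg)             # ~ sqrt(100) = 10
```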
Using a longer time integration: ###Code X_psd2 = pfb_spectrometer(data, n_taps=M, n_chan=P, n_int=1000, window_fn="hamming") plt.plot(db(X_psd[0]), c='#cccccc', label='short integration') plt.plot(db(X_psd2[1]), c='#cc0000', label='long integration') plt.ylim(-50, -30) plt.xlim(0, P/2) plt.xlabel("Channel") plt.ylabel("Power [dB]") plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Testing leakage with sin wavesIs the PFB's spectral leakage as good as people claim? We can test this out by sweeping a sine wave input and looking at the response of a few channels as a function of sine wave period. ###Code M, P, W = 6, 512, 256 # taps, channels, windows period = np.linspace(0, 0.025, 101) chan0_val = [] chan1_val = [] chan2_val = [] for p in period: t = np.arange(0, M*P*W) x = np.sin(t * p) + 0.001 X_psd = pfb_spectrometer(x, n_taps=M, n_chan=P, n_int=256, window_fn="hamming") chan0_val.append(X_psd[0, 0]) chan1_val.append(X_psd[0, 1]) chan2_val.append(X_psd[0, 2]) plt.plot(period, db(chan0_val)) plt.plot(period, db(chan1_val)) plt.plot(period, db(chan2_val)) plt.xlim(period[0], period[-1]) plt.ylabel("Power [dB]") plt.xlabel("Input sine wave period") plt.show() ###Output _____no_output_____
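Another way to quantify the leakage improvement is to compare the stopband response of a plain (rectangular-windowed) sinc prototype against the Hamming-windowed one, using `scipy.signal.freqz`. This sketch reuses the same style of M and P parameters as above; measuring the peak beyond twice the channel cutoff is a rough choice, not a formal definition:

```python
import numpy as np
import scipy.signal

M, P = 8, 32
n = M * P
sinc = scipy.signal.firwin(n, cutoff=1.0 / P, window="rectangular")
ham = sinc * scipy.signal.get_window("hamming", n)

def stopband_peak_db(coeffs, n_freqs=4096):
    # response relative to DC, in dB
    w, h = scipy.signal.freqz(coeffs, worN=n_freqs)
    resp = 20 * np.log10(np.abs(h) / np.abs(h[0]) + 1e-12)
    stop = int(n_freqs * 2 / P)   # start measuring well past the channel edge
    return resp[stop:].max()

print(stopband_peak_db(sinc))  # rectangular: roughly -20 dB leakage
print(stopband_peak_db(ham))   # hamming-windowed: tens of dB lower
```

The windowed prototype trades a slightly wider transition band for dramatically lower sidelobes, which is exactly what the channel-sweep plot above shows.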
docs/source/examples/Using NEOS package.ipynb
###Markdown Analyzing NEOs NEO stands for near-Earth object. The Center for NEO Studies ([CNEOS](http://cneos.jpl.nasa.gov/)) defines NEOs as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth’s neighborhood.And what does "near" exactly mean? In terms of orbital elements, asteroids and comets can be considered NEOs if their perihelion (orbit point which is nearest to the Sun) is less than 1.3 au = 1.945 * 108 km from the Sun. ###Code from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.frames import Planes from poliastro.plotting import StaticOrbitPlotter ###Output _____no_output_____ ###Markdown Small Body Database (SBDB) ###Code eros = Orbit.from_sbdb("Eros") eros.plot(label="Eros"); ###Output _____no_output_____ ###Markdown You can also search by IAU number or SPK-ID (there is a faster `neows.orbit_from_spk_id()` function in that case, although): ###Code ganymed = Orbit.from_sbdb("1036") # Ganymed IAU number amor = Orbit.from_sbdb("2001221") # Amor SPK-ID eros = Orbit.from_sbdb("2000433") # Eros SPK-ID frame = StaticOrbitPlotter(plane=Planes.EARTH_ECLIPTIC) frame.plot(ganymed, label="Ganymed") frame.plot(amor, label="Amor") frame.plot(eros, label="Eros"); ###Output _____no_output_____ ###Markdown You can use the wildcards from that browser: `*` and `?`. Keep it in mind that `from_sbdb()` can only return one Orbit, so if several objects are found with that name, it will raise an error with the different bodies. 
###Code try: Orbit.from_sbdb("*alley") except ValueError as err: print(err) ###Output 6 different objects found: 903 Nealley (A918 RH) 2688 Halley (1982 HG1) 14182 Alley (1998 WG12) 21651 Mission Valley (1999 OF1) 36445 Smalley (2000 QU) 1P/Halley ###Markdown Note that epoch is provided by the service itself, so if you need orbit on another epoch, you have to propagate it: ###Code eros.epoch.iso epoch = time.Time(2458000.0, scale="tdb", format="jd") eros_november = eros.propagate(epoch) eros_november.epoch.iso ###Output _____no_output_____ ###Markdown DASTCOM5 moduleThis module can also be used to get NEOs orbit, in the same way that `neows`, but it have some advantages (and some disadvantages).It relies on DASTCOM5 database, a NASA/JPL maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is updated typically a couple times per day, but potentially as frequently as once per hour, so you can download it whenever you want the more recently discovered bodies. This also means that, after downloading the file, you can use the database offline. The file is a ~230 MB zip that you can manually [download](ftp://ssd.jpl.nasa.gov/pub/ssd/dastcom5.zip) and unzip in `~/.poliastro` or, more easily, you can use```Pythondastcom5.download_dastcom5()``` The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is `orbit_from_name()`: ###Code from poliastro.neos import dastcom5 atira = dastcom5.orbit_from_name("atira")[0] # NEO wikipedia = dastcom5.orbit_from_name("wikipedia")[0] # Asteroid, but not NEO. frame = StaticOrbitPlotter() frame.plot(atira, label="Atira (NEO)") frame.plot(wikipedia, label="Wikipedia (asteroid)"); ###Output _____no_output_____ ###Markdown Keep in mind that this function returns a list of orbits matching your string. 
This is made on purpose given that there are comets which have several records in the database (one for each orbit determination in history) what allow plots like this one: ###Code halleys = dastcom5.orbit_from_name("1P") frame = StaticOrbitPlotter() frame.plot(halleys[0], label="Halley") frame.plot(halleys[5], label="Halley") frame.plot(halleys[10], label="Halley") frame.plot(halleys[20], label="Halley") frame.plot(halleys[-1], label="Halley"); ###Output _____no_output_____ ###Markdown While `neows` can only be used to get Orbit objects, `dastcom5` can also provide asteroid and comet complete database.Once you have this, you can get specific data about one or more bodies. The complete databases are `ndarrays`, so if you want to know the entire list of available parameters, you can look at the `dtype`, and they are also explained in[documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html): ###Code ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[ :20 ] # They are more than 100, but that would be too much lines in this notebook :P ###Output _____no_output_____ ###Markdown Asteroid and comet parameters are not exactly the same (although they are very close) With these `ndarrays` you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind.For example, NEOs can be grouped in several ways. One of the NEOs group is called `Atiras`, and is formed by NEOs whose orbits are contained entirely with the orbit of the Earth. They are a really little group, and we can try to plot all of these NEOs using `asteroid_db()`: Talking in orbital terms, `Atiras` have an aphelion distance, `Q < 0.983 au` and a semi-major axis, ` a < 1.0 au`.Visiting [documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html), you can see that DASTCOM5 provides semi-major axis, but doesn't provide aphelion distance. 
You can get aphelion distance easily knowing perihelion distance (q, QR in DASTCOM5) and semi-major axis `Q = 2*a - q`, but there are probably many other ways. ###Code aphelion_condition = 2 * ast_db["A"] - ast_db["QR"] < 0.983 axis_condition = ast_db["A"] < 1.3 atiras = ast_db[aphelion_condition & axis_condition] ###Output _____no_output_____ ###Markdown The number of `Atira NEOs` we use using this method is: ###Code len(atiras) ###Output _____no_output_____ ###Markdown Which is consistent with the [stats published by CNEOS](https://cneos.jpl.nasa.gov/stats/totals.html) Now we're gonna plot all of their orbits, with corresponding labels, just because we love plots :)We only need to get the 16 orbits from these 16 `ndarrays`.There are two ways:* Gather all their orbital elements manually and use the `Orbit.from_classical()` function.* Use the `NO` property (logical record number in DASTCOM5 database) and the `dastcom5.orbit_from_record()` function.The second one seems easier and it is related to the current notebook, so we are going to use that one, using the `ASTNAM` property of DASTCOM5 database: ###Code from poliastro.bodies import Earth frame = StaticOrbitPlotter() frame.plot_body_orbit(Earth, time.Time.now().tdb) for record in atiras["NO"]: ss = dastcom5.orbit_from_record(record) if ss.ecc < 1: frame.plot(ss, color="#666666") else: print(f"Skipping hyperbolic orbit: {record}") ###Output Skipping hyperbolic orbit: 50196429 Skipping hyperbolic orbit: 50384186 Skipping hyperbolic orbit: 50401362 Skipping hyperbolic orbit: 50405270 ###Markdown If we needed also the names of each asteroid, we could do: ###Code frame = StaticOrbitPlotter() frame.plot_body_orbit(Earth, time.Time.now().tdb) for i in range(len(atiras)): record = atiras["NO"][i] label = atiras["ASTNAM"][i].decode().strip() # DASTCOM5 strings are binary ss = dastcom5.orbit_from_record(record) if ss.ecc < 1: frame.plot(ss, label=label) else: print(f"Skipping hyperbolic orbit: {label}") ###Output Skipping 
hyperbolic orbit: 2013 CA134 Skipping hyperbolic orbit: 'Oumuamua Skipping hyperbolic orbit: A/2019 G4 Skipping hyperbolic orbit: A/2019 O3 ###Markdown We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted. Finally, another interesting function in `dastcom5` is `entire_db()`, which is really similar to `ast_db` and `com_db`, but it returns a `Pandas dataframe` instead of a `numpy ndarray`. The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed: ###Code db = dastcom5.entire_db() db.columns ###Output _____no_output_____ ###Markdown Also, in this function, DASTCOM5 data (specially strings) is ready to use (decoded and improved strings, etc): ###Code db[ db.NAME == "Halley" ] # As you can see, Halley is the name of an asteroid too, did you know that? ###Output _____no_output_____ ###Markdown Panda offers many functionalities, and can also be used in the same way as the `ast_db` and `comet_db` functions: ###Code aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983 axis_condition = db["A"] < 1.3 atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown What? I said they can be used in the same way! Dont worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis! 
###Code len(atiras[atiras.A < 0]) ###Output _____no_output_____ ###Markdown So, rewriting our condition: ###Code axis_condition = (db["A"] < 1.3) & (db["A"] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown Using NEOS package With the new `poliastro` version (0.7.0), a new package is included: [NEOs package](file:///C:/Users/Antonio/Desktop/Proyectos/poliastro/docs/source/html_output/api.htmlmodule-poliastro.neos).The docstrings of this package states the following:> Functions related to NEOs and different NASA APIs. All of them are coded as part of SOCIS 2017 proposal.So, first of all, an important question: What are NEOs?NEO stands for near-Earth object. The Center for NEO Studies ([CNEOS](http://cneos.jpl.nasa.gov/)) defines NEOs as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth’s neighborhood.And what does "near" exactly mean? In terms of orbital elements, asteroids and comets can be considered NEOs if their perihelion (orbit point which is nearest to the Sun) is less than 1.3 au = 1.945 * 108 km from the Sun. 
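As a quick check on the 1.945 × 10^8 km figure quoted in the NEO definition above — it is simply 1.3 au converted to kilometres, using the IAU value of the astronomical unit:

```python
AU_KM = 149_597_870.7       # 1 au in km (IAU 2012 definition)
print(1.3 * AU_KM)          # ~1.945e8 km
```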
###Code from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.plotting import OrbitPlotter2D ###Output _____no_output_____ ###Markdown NeoWS moduleThis module make requests to [NASA NEO Webservice](https://api.nasa.gov/api.htmlNeoWS), so you'll need an internet connection to run the next examples.The simplest `neows` function is `orbit_from_name()`, which return an Orbit object given a name: ###Code from poliastro.neos import neows eros = neows.orbit_from_name("Eros") frame = OrbitPlotter2D() frame.plot(eros, label="Eros") ###Output _____no_output_____ ###Markdown You can also search by IAU number or SPK-ID (there is a faster `neows.orbit_from_spk_id()` function in that case, although): ###Code ganymed = neows.orbit_from_name("1036") # Ganymed IAU number amor = neows.orbit_from_name("2001221") # Amor SPK-ID eros = neows.orbit_from_spk_id("2000433") # Eros SPK-ID frame = OrbitPlotter2D() frame.plot(ganymed, label="Ganymed") frame.plot(amor, label="Amor") frame.plot(eros, label="Eros") ###Output _____no_output_____ ###Markdown Since `neows` relies on [Small-Body Database browser](https://ssd.jpl.nasa.gov/sbdb.cgi) to get the SPK-ID given a body name, you can use the wildcards from that browser: `*` and `?`. Keep it in mind that `orbit_from_name()` can only return one Orbit, so if several objects are found with that name, it will raise an error with the different bodies. ###Code neows.orbit_from_name("*alley") ###Output _____no_output_____ ###Markdown Note that epoch is provided by the Web Service itself, so if you need orbit on another epoch, you have to propagate it: ###Code eros.epoch.iso epoch = time.Time(2458000.0, scale="tdb", format="jd") eros_november = eros.propagate(epoch) eros_november.epoch.iso ###Output _____no_output_____ ###Markdown Given that we are using NASA APIs, there is a maximum number of requests. 
If you want to make many requests, it is recommended getting a [NASA API key](https://api.nasa.gov/index.htmlapply-for-an-api-key). You can use your API key adding the `api_key` parameter to the function: ###Code neows.orbit_from_name("Toutatis", api_key="DEMO_KEY") ###Output _____no_output_____ ###Markdown DASTCOM5 moduleThis module can also be used to get NEOs orbit, in the same way that `neows`, but it have some advantages (and some disadvantages).It relies on DASTCOM5 database, a NASA/JPL maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is updated typically a couple times per day, but potentially as frequently as once per hour, so you can download it whenever you want the more recently discovered bodies. This also means that, after downloading the file, you can use the database offline. The file is a ~230 MB zip that you can manually [download](ftp://ssd.jpl.nasa.gov/pub/ssd/dastcom5.zip) and unzip in `~/.poliastro` or, more easily, you can use```Pythondastcom5.download_dastcom5()``` The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is `orbit_from_name()`: ###Code from poliastro.neos import dastcom5 atira = dastcom5.orbit_from_name("atira")[0] # NEO wikipedia = dastcom5.orbit_from_name("wikipedia")[0] # Asteroid, but not NEO. frame = OrbitPlotter2D() frame.plot(atira, label="Atira (NEO)") frame.plot(wikipedia, label="Wikipedia (asteroid)") ###Output _____no_output_____ ###Markdown Keep in mind that this function returns a list of orbits matching your string. 
This is made on purpose, given that there are comets which have several records in the database (one for each orbit determination in history), which allows plots like this one: ###Code halleys = dastcom5.orbit_from_name("1P") frame = OrbitPlotter2D() frame.plot(halleys[0], label="Halley") frame.plot(halleys[5], label="Halley") frame.plot(halleys[10], label="Halley") frame.plot(halleys[20], label="Halley") frame.plot(halleys[-1], label="Halley") ###Output _____no_output_____ ###Markdown While `neows` can only be used to get Orbit objects, `dastcom5` can also provide the complete asteroid and comet databases. Once you have these, you can get specific data about one or more bodies. The complete databases are `ndarrays`, so if you want to know the entire list of available parameters, you can look at the `dtype`; they are also explained in the [documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html): ###Code ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[:20] # There are more than 100, but that would be too many lines in this notebook :P ###Output _____no_output_____ ###Markdown Asteroid and comet parameters are not exactly the same (although they are very close): With these `ndarrays` you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind. For example, NEOs can be grouped in several ways. One of the NEO groups is called `Atiras`, and is formed by NEOs whose orbits are contained entirely within the orbit of the Earth. They are a really small group, and we can try to plot all of these NEOs using `asteroid_db()`: Talking in orbital terms, `Atiras` have an aphelion distance `Q < 0.983 au` and a semi-major axis `a < 1.0 au`. Visiting the [documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html), you can see that DASTCOM5 provides the semi-major axis, but doesn't provide the aphelion distance.
You can get the aphelion distance easily from the perihelion distance (q, `QR` in DASTCOM5) and the semi-major axis, `Q = 2*a - q`, but there are probably many other ways. ###Code aphelion_condition = 2 * ast_db["A"] - ast_db["QR"] < 0.983 axis_condition = ast_db["A"] < 1.3 atiras = ast_db[aphelion_condition & axis_condition] ###Output _____no_output_____ ###Markdown The number of Atira NEOs we get using this method is: ###Code len(atiras) ###Output _____no_output_____ ###Markdown Which is consistent with the [stats published by CNEOS](https://cneos.jpl.nasa.gov/stats/totals.html) Now we're gonna plot all of their orbits, with corresponding labels, just because we love plots :) ###Code from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth earth = Orbit.from_body_ephem(Earth) ###Output _____no_output_____ ###Markdown We only need to get the 16 orbits from these 16 records. There are two ways:* Gather all their orbital elements manually and use the `Orbit.from_classical()` function.* Use the `NO` property (logical record number in DASTCOM5 database) and the `dastcom5.orbit_from_record()` function.The second one seems easier and is more related to the current notebook, so we are going to use that one: We are going to use the `ASTNAM` property of the DASTCOM5 database: ###Code import matplotlib.pyplot as plt plt.ion() from poliastro.plotting.static import StaticOrbitPlotter frame = StaticOrbitPlotter() frame.plot(earth, label="Earth") for record in atiras["NO"]: ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, color="#666666") ###Output /home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/propagation.py:230: UserWarning: Frame <class 'astropy.coordinates.builtin_frames.icrs.ICRS'> does not support 'obstime', time values were not returned ###Markdown If we also needed the names of each asteroid, we could do: ###Code frame = StaticOrbitPlotter() frame.plot(earth, label="Earth") for i in range(len(atiras)): record =
atiras["NO"][i] label = atiras["ASTNAM"][i].decode().strip() # DASTCOM5 strings are binary ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, label=label) ###Output _____no_output_____ ###Markdown We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted. Finally, another interesting function in `dastcom5` is `entire_db()`, which is really similar to `ast_db` and `com_db`, but it returns a pandas `DataFrame` instead of a NumPy `ndarray`. The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed: ###Code db = dastcom5.entire_db() db.columns ###Output _____no_output_____ ###Markdown Also, in this function, DASTCOM5 data (especially strings) is ready to use (decoded and improved strings, etc.): ###Code db[ db.NAME == "Halley" ] # As you can see, Halley is the name of an asteroid too, did you know that? ###Output _____no_output_____ ###Markdown pandas offers many functionalities, and can also be used in the same way as the `ast_db` and `comet_db` functions: ###Code aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983 axis_condition = db["A"] < 1.3 atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown What? I said they can be used in the same way! Don't worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis!
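The negative values are not a bug: for a hyperbolic orbit (eccentricity e > 1), the conic-section relation a = q / (1 − e) yields a negative semi-major axis. A quick sanity check in plain Python, with made-up values:

```python
def semi_major_axis(q, ecc):
    """Semi-major axis from perihelion distance q and eccentricity: a = q / (1 - e)."""
    if ecc == 1.0:
        raise ValueError("parabolic orbit: semi-major axis is undefined (infinite)")
    return q / (1.0 - ecc)

print(semi_major_axis(1.0, 0.5))  # elliptic orbit: 2.0 (positive)
print(semi_major_axis(1.0, 1.5))  # hyperbolic orbit: -2.0 (negative)
```

So any comet on a hyperbolic trajectory trivially satisfies `A < 1.3`, which is why the filter needs tightening below.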
###Code len(atiras[atiras.A < 0]) ###Output _____no_output_____ ###Markdown So, rewriting our condition: ###Code axis_condition = (db["A"] < 1.3) & (db["A"] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown Using NEOS package With the new `poliastro` version (0.7.0), a new package is included: [NEOs package](file:///C:/Users/Antonio/Desktop/Proyectos/poliastro/docs/source/html_output/api.htmlmodule-poliastro.neos).The docstring of this package states the following:> Functions related to NEOs and different NASA APIs. All of them are coded as part of the SOCIS 2017 proposal.So, first of all, an important question: What are NEOs?NEO stands for near-Earth object. The Center for NEO Studies ([CNEOS](http://cneos.jpl.nasa.gov/)) defines NEOs as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth’s neighborhood.And what does "near" exactly mean? In terms of orbital elements, asteroids and comets can be considered NEOs if their perihelion (the orbit point nearest to the Sun) is less than 1.3 au = 1.945 × 10⁸ km from the Sun.
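The two numeric criteria used throughout this notebook — the aphelion formula Q = 2a − q and the q < 1.3 au NEO cutoff — fit in a couple of plain-Python helpers. This is a sketch without the astropy unit handling poliastro uses; all values are assumed to be in au, and the helpers are illustrative, not part of any library:

```python
AU_KM = 1.496e8  # kilometres per astronomical unit (approximate)

def aphelion(a, q):
    """Aphelion distance from semi-major axis a and perihelion distance q: Q = 2a - q."""
    return 2 * a - q

def is_neo(q):
    """NEO criterion: perihelion distance below 1.3 au."""
    return q < 1.3

print(aphelion(1.0, 0.5))            # 1.5
print(is_neo(1.1))                   # True
print(round(1.3 * AU_KM / 1e8, 3))   # 1.945 -> the 1.945e8 km quoted above
```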
###Code import matplotlib.pyplot as plt plt.ion() from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.plotting import OrbitPlotter ###Output _____no_output_____ ###Markdown NeoWS moduleThis module make requests to [NASA NEO Webservice](https://api.nasa.gov/api.htmlNeoWS), so you'll need an internet connection to run the next examples.The simplest `neows` function is `orbit_from_name()`, which return an Orbit object given a name: ###Code from poliastro.neos import neows eros = neows.orbit_from_name('Eros') frame = OrbitPlotter() frame.plot(eros, label='Eros') ###Output _____no_output_____ ###Markdown You can also search by IAU number or SPK-ID (there is a faster `neows.orbit_from_spk_id()` function in that case, although): ###Code ganymed = neows.orbit_from_name('1036') # Ganymed IAU number amor = neows.orbit_from_name('2001221') # Amor SPK-ID eros = neows.orbit_from_spk_id('2000433') # Eros SPK-ID frame = OrbitPlotter() frame.plot(ganymed, label='Ganymed') frame.plot(amor, label='Amor') frame.plot(eros, label='Eros') ###Output _____no_output_____ ###Markdown Since `neows` relies on [Small-Body Database browser](https://ssd.jpl.nasa.gov/sbdb.cgi) to get the SPK-ID given a body name, you can use the wildcards from that browser: `*` and `?`. Keep it in mind that `orbit_from_name()` can only return one Orbit, so if several objects are found with that name, it will raise an error with the different bodies. ###Code neows.orbit_from_name('*alley') ###Output _____no_output_____ ###Markdown Note that epoch is provided by the Web Service itself, so if you need orbit on another epoch, you have to propagate it: ###Code eros.epoch.iso epoch = time.Time(2458000.0, scale='tdb', format='jd') eros_november = eros.propagate(epoch) eros_november.epoch.iso ###Output _____no_output_____ ###Markdown Given that we are using NASA APIs, there is a maximum number of requests. 
If you want to make many requests, it is recommended getting a [NASA API key](https://api.nasa.gov/index.htmlapply-for-an-api-key). You can use your API key adding the `api_key` parameter to the function: ###Code neows.orbit_from_name('Toutatis', api_key='DEMO_KEY') ###Output _____no_output_____ ###Markdown DASTCOM5 moduleThis module can also be used to get NEOs orbit, in the same way that `neows`, but it have some advantages (and some disadvantages).It relies on DASTCOM5 database, a NASA/JPL maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is updated typically a couple times per day, but potentially as frequently as once per hour, so you can download it whenever you want the more recently discovered bodies. This also means that, after downloading the file, you can use the database offline. The file is a ~230 MB zip that you can manually [download](ftp://ssd.jpl.nasa.gov/pub/ssd/dastcom5.zip) and unzip in `~/.poliastro` or, more easily, you can use```Pythondastcom5.download_dastcom5()``` The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is `orbit_from_name()`: ###Code from poliastro.neos import dastcom5 atira = dastcom5.orbit_from_name('atira')[0] # NEO wikipedia = dastcom5.orbit_from_name('wikipedia')[0] # Asteroid, but not NEO. frame = OrbitPlotter() frame.plot(atira, label='Atira (NEO)') frame.plot(wikipedia, label='Wikipedia (asteroid)') ###Output _____no_output_____ ###Markdown Keep in mind that this function returns a list of orbits matching your string. 
This is made on purpose given that there are comets which have several records in the database (one for each orbit determination in history) what allow plots like this one: ###Code halleys = dastcom5.orbit_from_name('1P') frame = OrbitPlotter() frame.plot(halleys[0], label='Halley') frame.plot(halleys[5], label='Halley') frame.plot(halleys[10], label='Halley') frame.plot(halleys[20], label='Halley') frame.plot(halleys[-1], label='Halley') ###Output _____no_output_____ ###Markdown While `neows` can only be used to get Orbit objects, `dastcom5` can also provide asteroid and comet complete database.Once you have this, you can get specific data about one or more bodies. The complete databases are `ndarrays`, so if you want to know the entire list of available parameters, you can look at the `dtype`, and they are also explained in[documentation API Reference](http://docs.poliastro.space/en/latest/dastcom5 parameters.html): ###Code ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[:20] # They are more than 100, but that would be too much lines in this notebook :P ###Output _____no_output_____ ###Markdown Asteroid and comet parameters are not exactly the same (although they are very close): With these `ndarrays` you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind.For example, NEOs can be grouped in several ways. One of the NEOs group is called `Atiras`, and is formed by NEOs whose orbits are contained entirely with the orbit of the Earth. They are a really little group, and we can try to plot all of these NEOs using `asteroid_db()`: Talking in orbital terms, `Atiras` have an aphelion distance, `Q < 0.983 au` and a semi-major axis, ` a < 1.0 au`.Visiting [documentation API Reference](http://docs.poliastro.space/en/latest/dastcom5 parameters.html), you can see that DASTCOM5 provides semi-major axis, but doesn't provide aphelion distance. 
You can get the aphelion distance easily from the perihelion distance (q, `QR` in DASTCOM5) and the semi-major axis, `Q = 2*a - q`, but there are probably many other ways. ###Code aphelion_condition = 2 * ast_db['A'] - ast_db['QR'] < 0.983 axis_condition = ast_db['A'] < 1.3 atiras = ast_db[aphelion_condition & axis_condition] ###Output _____no_output_____ ###Markdown The number of Atira NEOs we get using this method is: ###Code len(atiras) ###Output _____no_output_____ ###Markdown Which is consistent with the [stats published by CNEOS](https://cneos.jpl.nasa.gov/stats/totals.html) Now we're gonna plot all of their orbits, with corresponding labels, just because we love plots :) ###Code from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth earth = Orbit.from_body_ephem(Earth) ###Output _____no_output_____ ###Markdown We only need to get the 16 orbits from these 16 records. There are two ways:* Gather all their orbital elements manually and use the `Orbit.from_classical()` function.* Use the `NO` property (logical record number in DASTCOM5 database) and the `dastcom5.orbit_from_record()` function.The second one seems easier and is more related to the current notebook, so we are going to use that one: We are going to use the `ASTNAM` property of the DASTCOM5 database: ###Code frame = OrbitPlotter() frame.plot(earth, label='Earth') for record in atiras['NO']: ss = dastcom5.orbit_from_record(record) frame.plot(ss, color="#666666") ###Output _____no_output_____ ###Markdown This is slightly incorrect, given that the Earth coordinates are in a different frame from the asteroids. However, for the purpose of this notebook, the effect is barely noticeable.
If we needed also the names of each asteroid, we could do: ###Code frame = OrbitPlotter() frame.plot(earth, label='Earth') for i in range(len(atiras)): record = atiras['NO'][i] label = atiras['ASTNAM'][i].decode().strip() # DASTCOM5 strings are binary ss = dastcom5.orbit_from_record(record) frame.plot(ss, label=label) ###Output _____no_output_____ ###Markdown We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted. Finally, another interesting function in `dastcom5` is `entire_db()`, which is really similar to `ast_db` and `com_db`, but it returns a `Pandas dataframe` instead of a `numpy ndarray`. The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed: ###Code db = dastcom5.entire_db() db.columns ###Output _____no_output_____ ###Markdown Also, in this function, DASTCOM5 data (specially strings) is ready to use (decoded and improved strings, etc): ###Code db[db.NAME == 'Halley'] # As you can see, Halley is the name of an asteroid too, did you know that? ###Output _____no_output_____ ###Markdown Panda offers many functionalities, and can also be used in the same way as the `ast_db` and `comet_db` functions: ###Code aphelion_condition = (2 * db['A'] - db['QR']) < 0.983 axis_condition = db['A'] < 1.3 atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown What? I said they can be used in the same way! Dont worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis! 
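To make the boolean-mask logic above concrete outside pandas and NumPy, here is the same Atira selection spelled out on a toy list of records in plain Python. The records and their `A`/`QR` values are hypothetical, chosen only to exercise each branch of the filter:

```python
# Toy records mimicking the A (semi-major axis) and QR (perihelion) fields, in au.
bodies = [
    {"name": "body-1", "A": 0.70, "QR": 0.50},    # 2*0.70 - 0.50 = 0.90 < 0.983 -> Atira
    {"name": "body-2", "A": 2.00, "QR": 1.00},    # aphelion 3.0 au -> not Atira
    {"name": "comet-x", "A": -2.00, "QR": 1.00},  # hyperbolic comet: negative A
]

atiras = [
    b for b in bodies
    if 2 * b["A"] - b["QR"] < 0.983  # aphelion inside Earth's orbit
    and 0 < b["A"] < 1.3             # excludes hyperbolic (negative-A) comets
]
print([b["name"] for b in atiras])  # ['body-1']
```

Note how the hyperbolic record slips past the aphelion test but is caught by the `A > 0` bound, which is exactly the issue the notebook runs into below.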
###Code len(atiras[atiras.A < 0]) ###Output _____no_output_____ ###Markdown So, rewriting our condition: ###Code axis_condition = (db['A'] < 1.3) & (db['A'] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown Analyzing NEOs NEO stands for near-Earth object. The Center for NEO Studies ([CNEOS](http://cneos.jpl.nasa.gov/)) defines NEOs as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth’s neighborhood.And what does "near" exactly mean? In terms of orbital elements, asteroids and comets can be considered NEOs if their perihelion (orbit point which is nearest to the Sun) is less than 1.3 au = 1.945 * 108 km from the Sun. ###Code from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.plotting import StaticOrbitPlotter ###Output _____no_output_____ ###Markdown Small Body Database (SBDB) ###Code eros = Orbit.from_sbdb("Eros") eros.plot(label="Eros"); ###Output _____no_output_____ ###Markdown You can also search by IAU number or SPK-ID (there is a faster `neows.orbit_from_spk_id()` function in that case, although): ###Code ganymed = Orbit.from_sbdb("1036") # Ganymed IAU number amor = Orbit.from_sbdb("2001221") # Amor SPK-ID eros = Orbit.from_sbdb("2000433") # Eros SPK-ID frame = StaticOrbitPlotter() frame.plot(ganymed, label="Ganymed") frame.plot(amor, label="Amor") frame.plot(eros, label="Eros"); ###Output _____no_output_____ ###Markdown You can use the wildcards from that browser: `*` and `?`. Keep it in mind that `from_sbdb()` can only return one Orbit, so if several objects are found with that name, it will raise an error with the different bodies. 
###Code Orbit.from_sbdb("*alley") ###Output _____no_output_____ ###Markdown Note that epoch is provided by the service itself, so if you need orbit on another epoch, you have to propagate it: ###Code eros.epoch.iso epoch = time.Time(2458000.0, scale="tdb", format="jd") eros_november = eros.propagate(epoch) eros_november.epoch.iso ###Output _____no_output_____ ###Markdown DASTCOM5 moduleThis module can also be used to get NEOs orbit, in the same way that `neows`, but it have some advantages (and some disadvantages).It relies on DASTCOM5 database, a NASA/JPL maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is updated typically a couple times per day, but potentially as frequently as once per hour, so you can download it whenever you want the more recently discovered bodies. This also means that, after downloading the file, you can use the database offline. The file is a ~230 MB zip that you can manually [download](ftp://ssd.jpl.nasa.gov/pub/ssd/dastcom5.zip) and unzip in `~/.poliastro` or, more easily, you can use```Pythondastcom5.download_dastcom5()``` The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is `orbit_from_name()`: ###Code from poliastro.neos import dastcom5 atira = dastcom5.orbit_from_name("atira")[0] # NEO wikipedia = dastcom5.orbit_from_name("wikipedia")[0] # Asteroid, but not NEO. frame = StaticOrbitPlotter() frame.plot(atira, label="Atira (NEO)") frame.plot(wikipedia, label="Wikipedia (asteroid)"); ###Output _____no_output_____ ###Markdown Keep in mind that this function returns a list of orbits matching your string. 
This is made on purpose given that there are comets which have several records in the database (one for each orbit determination in history) what allow plots like this one: ###Code halleys = dastcom5.orbit_from_name("1P") frame = StaticOrbitPlotter() frame.plot(halleys[0], label="Halley") frame.plot(halleys[5], label="Halley") frame.plot(halleys[10], label="Halley") frame.plot(halleys[20], label="Halley") frame.plot(halleys[-1], label="Halley"); ###Output _____no_output_____ ###Markdown While `neows` can only be used to get Orbit objects, `dastcom5` can also provide asteroid and comet complete database.Once you have this, you can get specific data about one or more bodies. The complete databases are `ndarrays`, so if you want to know the entire list of available parameters, you can look at the `dtype`, and they are also explained in[documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html): ###Code ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[ :20 ] # They are more than 100, but that would be too much lines in this notebook :P ###Output _____no_output_____ ###Markdown Asteroid and comet parameters are not exactly the same (although they are very close) With these `ndarrays` you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind.For example, NEOs can be grouped in several ways. One of the NEOs group is called `Atiras`, and is formed by NEOs whose orbits are contained entirely with the orbit of the Earth. They are a really little group, and we can try to plot all of these NEOs using `asteroid_db()`: Talking in orbital terms, `Atiras` have an aphelion distance, `Q < 0.983 au` and a semi-major axis, ` a < 1.0 au`.Visiting [documentation API Reference](https://docs.poliastro.space/en/latest/dastcom5 parameters.html), you can see that DASTCOM5 provides semi-major axis, but doesn't provide aphelion distance. 
You can get aphelion distance easily knowing perihelion distance (q, QR in DASTCOM5) and semi-major axis `Q = 2*a - q`, but there are probably many other ways. ###Code aphelion_condition = 2 * ast_db["A"] - ast_db["QR"] < 0.983 axis_condition = ast_db["A"] < 1.3 atiras = ast_db[aphelion_condition & axis_condition] ###Output _____no_output_____ ###Markdown The number of `Atira NEOs` we use using this method is: ###Code len(atiras) ###Output _____no_output_____ ###Markdown Which is consistent with the [stats published by CNEOS](https://cneos.jpl.nasa.gov/stats/totals.html) Now we're gonna plot all of their orbits, with corresponding labels, just because we love plots :)We only need to get the 16 orbits from these 16 `ndarrays`.There are two ways:* Gather all their orbital elements manually and use the `Orbit.from_classical()` function.* Use the `NO` property (logical record number in DASTCOM5 database) and the `dastcom5.orbit_from_record()` function.The second one seems easier and it is related to the current notebook, so we are going to use that one, using the `ASTNAM` property of DASTCOM5 database: ###Code from poliastro.bodies import Earth earth = Orbit.from_body_ephem(Earth) frame = StaticOrbitPlotter() frame.plot(earth, label=Earth) for record in atiras["NO"]: ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, color="#666666") ###Output _____no_output_____ ###Markdown If we needed also the names of each asteroid, we could do: ###Code frame = StaticOrbitPlotter() frame.plot(earth, label=Earth) for i in range(len(atiras)): record = atiras["NO"][i] label = atiras["ASTNAM"][i].decode().strip() # DASTCOM5 strings are binary ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, label=label) ###Output _____no_output_____ ###Markdown We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted. 
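As the `# DASTCOM5 strings are binary` comment above notes, raw database string fields come back as padded `bytes`; the `.decode().strip()` pattern used for the labels is plain Python. A minimal illustration with a made-up padded field:

```python
raw = b"Atira      "  # hypothetical padded record field, as stored in the database
name = raw.decode().strip()  # bytes -> str, then drop the whitespace padding
print(repr(name))  # 'Atira'
```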
Finally, another interesting function in `dastcom5` is `entire_db()`, which is really similar to `ast_db` and `com_db`, but it returns a `Pandas dataframe` instead of a `numpy ndarray`. The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed: ###Code db = dastcom5.entire_db() db.columns ###Output _____no_output_____ ###Markdown Also, in this function, DASTCOM5 data (specially strings) is ready to use (decoded and improved strings, etc): ###Code db[ db.NAME == "Halley" ] # As you can see, Halley is the name of an asteroid too, did you know that? ###Output _____no_output_____ ###Markdown Panda offers many functionalities, and can also be used in the same way as the `ast_db` and `comet_db` functions: ###Code aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983 axis_condition = db["A"] < 1.3 atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown What? I said they can be used in the same way! Dont worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis! ###Code len(atiras[atiras.A < 0]) ###Output _____no_output_____ ###Markdown So, rewriting our condition: ###Code axis_condition = (db["A"] < 1.3) & (db["A"] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown Analyzing NEOs NEO stands for near-Earth object. The Center for NEO Studies ([CNEOS](http://cneos.jpl.nasa.gov/)) defines NEOs as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth’s neighborhood.And what does "near" exactly mean? 
In terms of orbital elements, asteroids and comets can be considered NEOs if their perihelion (orbit point which is nearest to the Sun) is less than 1.3 au = 1.945 * 108 km from the Sun. ###Code from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.plotting import StaticOrbitPlotter ###Output _____no_output_____ ###Markdown Small Body Database (SBDB) ###Code eros = Orbit.from_sbdb("Eros") eros.plot(label="Eros"); ###Output _____no_output_____ ###Markdown You can also search by IAU number or SPK-ID (there is a faster `neows.orbit_from_spk_id()` function in that case, although): ###Code ganymed = Orbit.from_sbdb("1036") # Ganymed IAU number amor = Orbit.from_sbdb("2001221") # Amor SPK-ID eros = Orbit.from_sbdb("2000433") # Eros SPK-ID frame = StaticOrbitPlotter() frame.plot(ganymed, label="Ganymed") frame.plot(amor, label="Amor") frame.plot(eros, label="Eros"); ###Output _____no_output_____ ###Markdown You can use the wildcards from that browser: `*` and `?`. Keep it in mind that `from_sbdb()` can only return one Orbit, so if several objects are found with that name, it will raise an error with the different bodies. ###Code Orbit.from_sbdb("*alley") ###Output _____no_output_____ ###Markdown Note that epoch is provided by the service itself, so if you need orbit on another epoch, you have to propagate it: ###Code eros.epoch.iso epoch = time.Time(2458000.0, scale="tdb", format="jd") eros_november = eros.propagate(epoch) eros_november.epoch.iso ###Output _____no_output_____ ###Markdown DASTCOM5 moduleThis module can also be used to get NEOs orbit, in the same way that `neows`, but it have some advantages (and some disadvantages).It relies on DASTCOM5 database, a NASA/JPL maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. 
According to its README, it is updated typically a couple times per day, but potentially as frequently as once per hour, so you can download it whenever you want the more recently discovered bodies. This also means that, after downloading the file, you can use the database offline. The file is a ~230 MB zip that you can manually [download](ftp://ssd.jpl.nasa.gov/pub/ssd/dastcom5.zip) and unzip in `~/.poliastro` or, more easily, you can use```Pythondastcom5.download_dastcom5()``` The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is `orbit_from_name()`: ###Code from poliastro.neos import dastcom5 atira = dastcom5.orbit_from_name("atira")[0] # NEO wikipedia = dastcom5.orbit_from_name("wikipedia")[0] # Asteroid, but not NEO. frame = StaticOrbitPlotter() frame.plot(atira, label="Atira (NEO)") frame.plot(wikipedia, label="Wikipedia (asteroid)"); ###Output _____no_output_____ ###Markdown Keep in mind that this function returns a list of orbits matching your string. This is made on purpose given that there are comets which have several records in the database (one for each orbit determination in history) what allow plots like this one: ###Code halleys = dastcom5.orbit_from_name("1P") frame = StaticOrbitPlotter() frame.plot(halleys[0], label="Halley") frame.plot(halleys[5], label="Halley") frame.plot(halleys[10], label="Halley") frame.plot(halleys[20], label="Halley") frame.plot(halleys[-1], label="Halley"); ###Output _____no_output_____ ###Markdown While `neows` can only be used to get Orbit objects, `dastcom5` can also provide asteroid and comet complete database.Once you have this, you can get specific data about one or more bodies. 
The complete databases are `ndarrays`, so if you want to know the entire list of available parameters, you can look at the `dtype`, and they are also explained in[documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html): ###Code ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[ :20 ] # They are more than 100, but that would be too much lines in this notebook :P ###Output _____no_output_____ ###Markdown Asteroid and comet parameters are not exactly the same (although they are very close) With these `ndarrays` you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind.For example, NEOs can be grouped in several ways. One of the NEOs group is called `Atiras`, and is formed by NEOs whose orbits are contained entirely with the orbit of the Earth. They are a really little group, and we can try to plot all of these NEOs using `asteroid_db()`: Talking in orbital terms, `Atiras` have an aphelion distance, `Q < 0.983 au` and a semi-major axis, ` a < 1.0 au`.Visiting [documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html), you can see that DASTCOM5 provides semi-major axis, but doesn't provide aphelion distance. You can get aphelion distance easily knowing perihelion distance (q, QR in DASTCOM5) and semi-major axis `Q = 2*a - q`, but there are probably many other ways. 
###Code aphelion_condition = 2 * ast_db["A"] - ast_db["QR"] < 0.983 axis_condition = ast_db["A"] < 1.3 atiras = ast_db[aphelion_condition & axis_condition] ###Output _____no_output_____ ###Markdown The number of `Atira NEOs` we use using this method is: ###Code len(atiras) ###Output _____no_output_____ ###Markdown Which is consistent with the [stats published by CNEOS](https://cneos.jpl.nasa.gov/stats/totals.html) Now we're gonna plot all of their orbits, with corresponding labels, just because we love plots :)We only need to get the 16 orbits from these 16 `ndarrays`.There are two ways:* Gather all their orbital elements manually and use the `Orbit.from_classical()` function.* Use the `NO` property (logical record number in DASTCOM5 database) and the `dastcom5.orbit_from_record()` function.The second one seems easier and it is related to the current notebook, so we are going to use that one, using the `ASTNAM` property of DASTCOM5 database: ###Code from poliastro.bodies import Earth earth = Orbit.from_body_ephem(Earth) frame = StaticOrbitPlotter() frame.plot(earth, label=Earth) for record in atiras["NO"]: ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, color="#666666") ###Output _____no_output_____ ###Markdown If we needed also the names of each asteroid, we could do: ###Code frame = StaticOrbitPlotter() frame.plot(earth, label=Earth) for i in range(len(atiras)): record = atiras["NO"][i] label = atiras["ASTNAM"][i].decode().strip() # DASTCOM5 strings are binary ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, label=label) ###Output _____no_output_____ ###Markdown We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted. Finally, another interesting function in `dastcom5` is `entire_db()`, which is really similar to `ast_db` and `com_db`, but it returns a `Pandas dataframe` instead of a `numpy ndarray`. 
The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed: ###Code db = dastcom5.entire_db() db.columns ###Output _____no_output_____ ###Markdown Also, in this function, DASTCOM5 data (especially strings) is ready to use (decoded and improved strings, etc.): ###Code db[db.NAME == "Halley"] # As you can see, Halley is the name of an asteroid too, did you know that? ###Output _____no_output_____ ###Markdown Pandas offers many functionalities, and can also be used in the same way as the `ast_db` and `comet_db` functions: ###Code aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983 axis_condition = db["A"] < 1.3 atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown What? I said they can be used in the same way! Don't worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis! ###Code len(atiras[atiras.A < 0]) ###Output _____no_output_____ ###Markdown So, rewriting our condition: ###Code axis_condition = (db["A"] < 1.3) & (db["A"] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown Analyzing NEOs NEO stands for near-Earth object. The Center for NEO Studies ([CNEOS](http://cneos.jpl.nasa.gov/)) defines NEOs as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth's neighborhood. And what does "near" exactly mean? In terms of orbital elements, asteroids and comets can be considered NEOs if their perihelion (the orbit point nearest to the Sun) is less than 1.3 au = 1.945 * 10^8 km from the Sun.
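That definition is easy to turn into a check on classical elements, since the perihelion distance is q = a * (1 - e). A minimal sketch in plain Python (approximate, hand-picked elements for illustration; not a poliastro call):

```python
def is_neo(a_au, ecc, q_max_au=1.3):
    """Rough NEO test: perihelion q = a * (1 - e) must fall below ~1.3 au."""
    return a_au * (1.0 - ecc) < q_max_au

# Eros-like elements (roughly a = 1.458 au, e = 0.223): q ~ 1.13 au
print(is_neo(1.458, 0.223))  # True
# Ceres-like main-belt elements (roughly a = 2.77 au, e = 0.076): q ~ 2.56 au
print(is_neo(2.77, 0.076))   # False
```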
###Code from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.plotting import StaticOrbitPlotter ###Output _____no_output_____ ###Markdown Small Body Database (SBDB) ###Code eros = Orbit.from_sbdb("Eros") eros.plot(label="Eros"); ###Output _____no_output_____ ###Markdown You can also search by IAU number or SPK-ID (there is a faster `neows.orbit_from_spk_id()` function for that case, though): ###Code ganymed = Orbit.from_sbdb("1036") # Ganymed IAU number amor = Orbit.from_sbdb("2001221") # Amor SPK-ID eros = Orbit.from_sbdb("2000433") # Eros SPK-ID frame = StaticOrbitPlotter() frame.plot(ganymed, label="Ganymed") frame.plot(amor, label="Amor") frame.plot(eros, label="Eros"); ###Output _____no_output_____ ###Markdown You can use the wildcards from that browser: `*` and `?`. Keep in mind that `from_sbdb()` can only return one Orbit, so if several objects are found with that name, it will raise an error listing the different bodies. ###Code Orbit.from_sbdb("*alley") ###Output _____no_output_____ ###Markdown Note that the epoch is provided by the service itself, so if you need the orbit at another epoch, you have to propagate it: ###Code eros.epoch.iso epoch = time.Time(2458000.0, scale="tdb", format="jd") eros_november = eros.propagate(epoch) eros_november.epoch.iso ###Output _____no_output_____ ###Markdown DASTCOM5 module This module can also be used to get NEO orbits, in the same way as `neows`, but it has some advantages (and some disadvantages). It relies on the DASTCOM5 database, a NASA/JPL-maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is updated typically a couple of times per day, but potentially as frequently as once per hour, so you can download it whenever you want the most recently discovered bodies. This also means that, after downloading the file, you can use the database offline.
The file is a ~230 MB zip that you can manually [download](ftp://ssd.jpl.nasa.gov/pub/ssd/dastcom5.zip) and unzip in `~/.poliastro` or, more easily, you can use ```Python dastcom5.download_dastcom5() ``` The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is `orbit_from_name()`: ###Code from poliastro.neos import dastcom5 atira = dastcom5.orbit_from_name("atira")[0] # NEO wikipedia = dastcom5.orbit_from_name("wikipedia")[0] # Asteroid, but not NEO. frame = StaticOrbitPlotter() frame.plot(atira, label="Atira (NEO)") frame.plot(wikipedia, label="Wikipedia (asteroid)"); ###Output _____no_output_____ ###Markdown Keep in mind that this function returns a list of orbits matching your string. This is on purpose, given that there are comets which have several records in the database (one for each orbit determination in history), which allows plots like this one: ###Code halleys = dastcom5.orbit_from_name("1P") frame = StaticOrbitPlotter() frame.plot(halleys[0], label="Halley") frame.plot(halleys[5], label="Halley") frame.plot(halleys[10], label="Halley") frame.plot(halleys[20], label="Halley") frame.plot(halleys[-1], label="Halley"); ###Output _____no_output_____ ###Markdown While `neows` can only be used to get Orbit objects, `dastcom5` can also provide the complete asteroid and comet databases. Once you have these, you can get specific data about one or more bodies.
The complete databases are `ndarrays`, so if you want to know the entire list of available parameters, you can look at the `dtype`, and they are also explained in the [documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html): ###Code ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[:20] # They are more than 100, but that would be too many lines in this notebook :P ###Output _____no_output_____ ###Markdown Asteroid and comet parameters are not exactly the same (although they are very close). With these `ndarrays` you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind. For example, NEOs can be grouped in several ways. One of the NEO groups is called `Atiras`, and is formed by NEOs whose orbits are contained entirely within the orbit of the Earth. They are a really small group, and we can try to plot all of these NEOs using `asteroid_db()`: Talking in orbital terms, `Atiras` have an aphelion distance `Q < 0.983 au` and a semi-major axis `a < 1.0 au`. Visiting the [documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html), you can see that DASTCOM5 provides the semi-major axis, but doesn't provide the aphelion distance. You can get the aphelion distance easily from the perihelion distance (q, QR in DASTCOM5) and the semi-major axis: `Q = 2*a - q`, but there are probably many other ways.
###Code aphelion_condition = 2 * ast_db["A"] - ast_db["QR"] < 0.983 axis_condition = ast_db["A"] < 1.3 atiras = ast_db[aphelion_condition & axis_condition] ###Output _____no_output_____ ###Markdown The number of `Atira NEOs` we get using this method is: ###Code len(atiras) ###Output _____no_output_____ ###Markdown Which is consistent with the [stats published by CNEOS](https://cneos.jpl.nasa.gov/stats/totals.html). Now we're going to plot all of their orbits, with corresponding labels, just because we love plots :) We only need to get the 16 orbits from these 16 `ndarrays`. There are two ways: * Gather all their orbital elements manually and use the `Orbit.from_classical()` function. * Use the `NO` property (logical record number in the DASTCOM5 database) and the `dastcom5.orbit_from_record()` function. The second one seems easier and is more related to the current notebook, so we are going to use that one, together with the `ASTNAM` property of the DASTCOM5 database: ###Code from poliastro.bodies import Earth earth = Orbit.from_body_ephem(Earth) frame = StaticOrbitPlotter() frame.plot(earth, label="Earth") for record in atiras["NO"]: ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, color="#666666") ###Output _____no_output_____ ###Markdown If we also needed the names of each asteroid, we could do: ###Code frame = StaticOrbitPlotter() frame.plot(earth, label="Earth") for i in range(len(atiras)): record = atiras["NO"][i] label = atiras["ASTNAM"][i].decode().strip() # DASTCOM5 strings are binary ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, label=label) ###Output _____no_output_____ ###Markdown We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted. Finally, another interesting function in `dastcom5` is `entire_db()`, which is really similar to `ast_db` and `com_db`, but it returns a pandas `DataFrame` instead of a NumPy `ndarray`.
The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed: ###Code db = dastcom5.entire_db() db.columns ###Output _____no_output_____ ###Markdown Also, in this function, DASTCOM5 data (especially strings) is ready to use (decoded and improved strings, etc.): ###Code db[db.NAME == "Halley"] # As you can see, Halley is the name of an asteroid too, did you know that? ###Output _____no_output_____ ###Markdown Pandas offers many functionalities, and can also be used in the same way as the `ast_db` and `comet_db` functions: ###Code aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983 axis_condition = db["A"] < 1.3 atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown What? I said they can be used in the same way! Don't worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis! ###Code len(atiras[atiras.A < 0]) ###Output _____no_output_____ ###Markdown So, rewriting our condition: ###Code axis_condition = (db["A"] < 1.3) & (db["A"] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown Using NEOS package With the new `poliastro` version (0.7.0), a new package is included: the NEOs package (`poliastro.neos`). The docstring of this package states the following: > Functions related to NEOs and different NASA APIs. All of them are coded as part of SOCIS 2017 proposal. So, first of all, an important question: What are NEOs? NEO stands for near-Earth object.
The Center for NEO Studies ([CNEOS](http://cneos.jpl.nasa.gov/)) defines NEOs as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth's neighborhood. And what does "near" exactly mean? In terms of orbital elements, asteroids and comets can be considered NEOs if their perihelion (the orbit point nearest to the Sun) is less than 1.3 au = 1.945 * 10^8 km from the Sun. ###Code import matplotlib.pyplot as plt plt.ion() from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.plotting import OrbitPlotter ###Output _____no_output_____ ###Markdown NeoWS module This module makes requests to the [NASA NEO Webservice](https://api.nasa.gov/api.html#NeoWS), so you'll need an internet connection to run the next examples. The simplest `neows` function is `orbit_from_name()`, which returns an Orbit object given a name: ###Code from poliastro.neos import neows eros = neows.orbit_from_name('Eros') frame = OrbitPlotter() frame.plot(eros, label='Eros'); ###Output _____no_output_____ ###Markdown You can also search by IAU number or SPK-ID (there is a faster `neows.orbit_from_spk_id()` function for that case, though): ###Code ganymed = neows.orbit_from_name('1036') # Ganymed IAU number amor = neows.orbit_from_name('2001221') # Amor SPK-ID eros = neows.orbit_from_spk_id('2000433') # Eros SPK-ID frame = OrbitPlotter() frame.plot(ganymed, label='Ganymed') frame.plot(amor, label='Amor') frame.plot(eros, label='Eros'); ###Output _____no_output_____ ###Markdown Since `neows` relies on the [Small-Body Database browser](https://ssd.jpl.nasa.gov/sbdb.cgi) to get the SPK-ID given a body name, you can use the wildcards from that browser: `*` and `?`. Keep in mind that `orbit_from_name()` can only return one Orbit, so if several objects are found with that name, it will raise an error listing the different bodies.
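The `*` and `?` wildcards follow the usual glob semantics: `*` matches any run of characters and `?` exactly one. A quick illustration with Python's standard-library `fnmatch`, which uses the same rules (the name list here is made up):

```python
from fnmatch import fnmatch

# Made-up candidate names to match against glob-style patterns
names = ["Halley", "Smalley", "Galley", "Atira"]
print([n for n in names if fnmatch(n, "*alley")])  # ['Halley', 'Smalley', 'Galley']
print([n for n in names if fnmatch(n, "?alley")])  # ['Halley', 'Galley']
```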
###Code neows.orbit_from_name('*alley') ###Output _____no_output_____ ###Markdown Note that the epoch is provided by the Web Service itself, so if you need the orbit at another epoch, you have to propagate it: ###Code eros.epoch.iso epoch = time.Time(2458000.0, scale='tdb', format='jd') eros_november = eros.propagate(epoch) eros_november.epoch.iso ###Output _____no_output_____ ###Markdown Given that we are using NASA APIs, there is a maximum number of requests. If you want to make many requests, it is recommended to get a [NASA API key](https://api.nasa.gov/index.html#apply-for-an-api-key). You can use your API key by adding the `api_key` parameter to the function: ###Code neows.orbit_from_name('Toutatis', api_key='DEMO_KEY') ###Output _____no_output_____ ###Markdown DASTCOM5 module This module can also be used to get NEO orbits, in the same way as `neows`, but it has some advantages (and some disadvantages). It relies on the DASTCOM5 database, a NASA/JPL-maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is updated typically a couple of times per day, but potentially as frequently as once per hour, so you can download it whenever you want the most recently discovered bodies. This also means that, after downloading the file, you can use the database offline. The file is a ~230 MB zip that you can manually [download](ftp://ssd.jpl.nasa.gov/pub/ssd/dastcom5.zip) and unzip in `~/.poliastro` or, more easily, you can use ```Python dastcom5.download_dastcom5() ``` The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is `orbit_from_name()`: ###Code from poliastro.neos import dastcom5 atira = dastcom5.orbit_from_name('atira')[0] # NEO wikipedia = dastcom5.orbit_from_name('wikipedia')[0] # Asteroid, but not NEO.
frame = OrbitPlotter() frame.plot(atira, label='Atira (NEO)') frame.plot(wikipedia, label='Wikipedia (asteroid)'); ###Output _____no_output_____ ###Markdown Keep in mind that this function returns a list of orbits matching your string. This is on purpose, given that there are comets which have several records in the database (one for each orbit determination in history), which allows plots like this one: ###Code halleys = dastcom5.orbit_from_name('1P') frame = OrbitPlotter() frame.plot(halleys[0], label='Halley') frame.plot(halleys[5], label='Halley') frame.plot(halleys[10], label='Halley') frame.plot(halleys[20], label='Halley') frame.plot(halleys[-1], label='Halley'); ###Output _____no_output_____ ###Markdown While `neows` can only be used to get Orbit objects, `dastcom5` can also provide the complete asteroid and comet databases. Once you have these, you can get specific data about one or more bodies. The complete databases are `ndarrays`, so if you want to know the entire list of available parameters, you can look at the `dtype`, and they are also explained in the [documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html): ###Code ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[:20] # They are more than 100, but that would be too many lines in this notebook :P ###Output _____no_output_____ ###Markdown Asteroid and comet parameters are not exactly the same (although they are very close): With these `ndarrays` you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind. For example, NEOs can be grouped in several ways. One of the NEO groups is called `Atiras`, and is formed by NEOs whose orbits are contained entirely within the orbit of the Earth.
They are a really small group, and we can try to plot all of these NEOs using `asteroid_db()`: Talking in orbital terms, `Atiras` have an aphelion distance `Q < 0.983 au` and a semi-major axis `a < 1.0 au`. Visiting the [documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html), you can see that DASTCOM5 provides the semi-major axis, but doesn't provide the aphelion distance. You can get the aphelion distance easily from the perihelion distance (q, QR in DASTCOM5) and the semi-major axis: `Q = 2*a - q`, but there are probably many other ways. ###Code aphelion_condition = 2 * ast_db['A'] - ast_db['QR'] < 0.983 axis_condition = ast_db['A'] < 1.3 atiras = ast_db[aphelion_condition & axis_condition] ###Output _____no_output_____ ###Markdown The number of `Atira NEOs` we get using this method is: ###Code len(atiras) ###Output _____no_output_____ ###Markdown Which is consistent with the [stats published by CNEOS](https://cneos.jpl.nasa.gov/stats/totals.html). Now we're going to plot all of their orbits, with corresponding labels, just because we love plots :) ###Code from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth earth = Orbit.from_body_ephem(Earth) ###Output _____no_output_____ ###Markdown We only need to get the 16 orbits from these 16 `ndarrays`. There are two ways: * Gather all their orbital elements manually and use the `Orbit.from_classical()` function. * Use the `NO` property (logical record number in the DASTCOM5 database) and the `dastcom5.orbit_from_record()` function. The second one seems easier and is more related to the current notebook, so we are going to use that one: We are going to use the `ASTNAM` property of the DASTCOM5 database: ###Code frame = OrbitPlotter() frame.plot(earth, label='Earth') for record in atiras['NO']: ss = dastcom5.orbit_from_record(record) frame.plot(ss, color="#666666") ###Output _____no_output_____ ###Markdown This is slightly incorrect, given that the Earth coordinates are in a different frame from the asteroids.
However, for the purpose of this notebook, the effect is barely noticeable. If we also needed the names of each asteroid, we could do: ###Code frame = OrbitPlotter() frame.plot(earth, label='Earth') for i in range(len(atiras)): record = atiras['NO'][i] label = atiras['ASTNAM'][i].decode().strip() # DASTCOM5 strings are binary ss = dastcom5.orbit_from_record(record) frame.plot(ss, label=label) ###Output _____no_output_____ ###Markdown We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted. Finally, another interesting function in `dastcom5` is `entire_db()`, which is really similar to `ast_db` and `com_db`, but it returns a pandas `DataFrame` instead of a NumPy `ndarray`. The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed: ###Code db = dastcom5.entire_db() db.columns ###Output _____no_output_____ ###Markdown Also, in this function, DASTCOM5 data (especially strings) is ready to use (decoded and improved strings, etc.): ###Code db[db.NAME == 'Halley'] # As you can see, Halley is the name of an asteroid too, did you know that? ###Output _____no_output_____ ###Markdown Pandas offers many functionalities, and can also be used in the same way as the `ast_db` and `comet_db` functions: ###Code aphelion_condition = (2 * db['A'] - db['QR']) < 0.983 axis_condition = db['A'] < 1.3 atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown What? I said they can be used in the same way! Don't worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis!
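Why can a semi-major axis be negative? For a hyperbolic (unbound) orbit the specific orbital energy eps = v^2/2 - mu/r is positive, so a = -mu / (2 * eps) comes out negative. A quick numeric sketch using only the two-body vis-viva relation (made-up state values, no poliastro involved):

```python
import math

MU_SUN = 1.32712440018e11  # Sun's gravitational parameter, km^3/s^2
AU_KM = 1.495978707e8      # one astronomical unit, km

def semi_major_axis(r_km, v_kms, mu=MU_SUN):
    """Semi-major axis from the vis-viva energy: eps = v^2/2 - mu/r, a = -mu/(2*eps)."""
    eps = v_kms ** 2 / 2.0 - mu / r_km
    return -mu / (2.0 * eps)

# Bound case: circular speed at 1 au gives a = 1 au (positive)
v_circ = math.sqrt(MU_SUN / AU_KM)
print(semi_major_axis(AU_KM, v_circ) / AU_KM)                       # ~ 1.0
# Unbound case: 20% above escape speed gives a negative semi-major axis
print(semi_major_axis(AU_KM, 1.2 * math.sqrt(2) * v_circ) / AU_KM)  # negative, ~ -1.14
```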
###Code len(atiras[atiras.A < 0]) ###Output _____no_output_____ ###Markdown So, rewriting our condition: ###Code axis_condition = (db['A'] < 1.3) & (db['A'] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____
###Code from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.plotting import OrbitPlotter2D ###Output _____no_output_____ ###Markdown NeoWS moduleThis module make requests to [NASA NEO Webservice](https://api.nasa.gov/api.htmlNeoWS), so you'll need an internet connection to run the next examples.The simplest `neows` function is `orbit_from_name()`, which return an Orbit object given a name: ###Code from poliastro.neos import neows eros = neows.orbit_from_name('Eros') frame = OrbitPlotter2D() frame.plot(eros, label='Eros') ###Output _____no_output_____ ###Markdown You can also search by IAU number or SPK-ID (there is a faster `neows.orbit_from_spk_id()` function in that case, although): ###Code ganymed = neows.orbit_from_name('1036') # Ganymed IAU number amor = neows.orbit_from_name('2001221') # Amor SPK-ID eros = neows.orbit_from_spk_id('2000433') # Eros SPK-ID frame = OrbitPlotter2D() frame.plot(ganymed, label='Ganymed') frame.plot(amor, label='Amor') frame.plot(eros, label='Eros') ###Output _____no_output_____ ###Markdown Since `neows` relies on [Small-Body Database browser](https://ssd.jpl.nasa.gov/sbdb.cgi) to get the SPK-ID given a body name, you can use the wildcards from that browser: `*` and `?`. Keep it in mind that `orbit_from_name()` can only return one Orbit, so if several objects are found with that name, it will raise an error with the different bodies. ###Code neows.orbit_from_name('*alley') ###Output _____no_output_____ ###Markdown Note that epoch is provided by the Web Service itself, so if you need orbit on another epoch, you have to propagate it: ###Code eros.epoch.iso epoch = time.Time(2458000.0, scale='tdb', format='jd') eros_november = eros.propagate(epoch) eros_november.epoch.iso ###Output _____no_output_____ ###Markdown Given that we are using NASA APIs, there is a maximum number of requests. 
If you want to make many requests, it is recommended getting a [NASA API key](https://api.nasa.gov/index.htmlapply-for-an-api-key). You can use your API key adding the `api_key` parameter to the function: ###Code neows.orbit_from_name('Toutatis', api_key='DEMO_KEY') ###Output _____no_output_____ ###Markdown DASTCOM5 moduleThis module can also be used to get NEOs orbit, in the same way that `neows`, but it have some advantages (and some disadvantages).It relies on DASTCOM5 database, a NASA/JPL maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is updated typically a couple times per day, but potentially as frequently as once per hour, so you can download it whenever you want the more recently discovered bodies. This also means that, after downloading the file, you can use the database offline. The file is a ~230 MB zip that you can manually [download](ftp://ssd.jpl.nasa.gov/pub/ssd/dastcom5.zip) and unzip in `~/.poliastro` or, more easily, you can use```Pythondastcom5.download_dastcom5()``` The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is `orbit_from_name()`: ###Code from poliastro.neos import dastcom5 atira = dastcom5.orbit_from_name('atira')[0] # NEO wikipedia = dastcom5.orbit_from_name('wikipedia')[0] # Asteroid, but not NEO. frame = OrbitPlotter2D() frame.plot(atira, label='Atira (NEO)') frame.plot(wikipedia, label='Wikipedia (asteroid)') ###Output _____no_output_____ ###Markdown Keep in mind that this function returns a list of orbits matching your string. 
This is made on purpose given that there are comets which have several records in the database (one for each orbit determination in history) what allow plots like this one: ###Code halleys = dastcom5.orbit_from_name('1P') frame = OrbitPlotter2D() frame.plot(halleys[0], label='Halley') frame.plot(halleys[5], label='Halley') frame.plot(halleys[10], label='Halley') frame.plot(halleys[20], label='Halley') frame.plot(halleys[-1], label='Halley') ###Output _____no_output_____ ###Markdown While `neows` can only be used to get Orbit objects, `dastcom5` can also provide asteroid and comet complete database.Once you have this, you can get specific data about one or more bodies. The complete databases are `ndarrays`, so if you want to know the entire list of available parameters, you can look at the `dtype`, and they are also explained in[documentation API Reference](https://docs.poliastro.space/en/latest/api/safe/neos/dastcom5_parameters.html): ###Code ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[:20] # They are more than 100, but that would be too much lines in this notebook :P ###Output _____no_output_____ ###Markdown Asteroid and comet parameters are not exactly the same (although they are very close): With these `ndarrays` you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind.For example, NEOs can be grouped in several ways. One of the NEOs group is called `Atiras`, and is formed by NEOs whose orbits are contained entirely with the orbit of the Earth. They are a really little group, and we can try to plot all of these NEOs using `asteroid_db()`: Talking in orbital terms, `Atiras` have an aphelion distance, `Q < 0.983 au` and a semi-major axis, ` a < 1.0 au`.Visiting [documentation API Reference](https://docs.poliastro.space/en/latest/dastcom5 parameters.html), you can see that DASTCOM5 provides semi-major axis, but doesn't provide aphelion distance. 
You can get aphelion distance easily knowing perihelion distance (q, QR in DASTCOM5) and semi-major axis `Q = 2*a - q`, but there are probably many other ways. ###Code aphelion_condition = 2 * ast_db['A'] - ast_db['QR'] < 0.983 axis_condition = ast_db['A'] < 1.3 atiras = ast_db[aphelion_condition & axis_condition] ###Output _____no_output_____ ###Markdown The number of `Atira NEOs` we use using this method is: ###Code len(atiras) ###Output _____no_output_____ ###Markdown Which is consistent with the [stats published by CNEOS](https://cneos.jpl.nasa.gov/stats/totals.html) Now we're gonna plot all of their orbits, with corresponding labels, just because we love plots :) ###Code from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth earth = Orbit.from_body_ephem(Earth) ###Output _____no_output_____ ###Markdown We only need to get the 16 orbits from these 16 `ndarrays`.There are two ways:* Gather all their orbital elements manually and use the `Orbit.from_classical()` function.* Use the `NO` property (logical record number in DASTCOM5 database) and the `dastcom5.orbit_from_record()` function.The second one seems easier and it is related to the current notebook, so we are going to use that one: We are going to use `ASTNAM` property of DASTCOM5 database: ###Code import matplotlib.pyplot as plt plt.ion() from poliastro.plotting.static import StaticOrbitPlotter frame = StaticOrbitPlotter() frame.plot(earth, label='Earth') for record in atiras['NO']: ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, color="#666666") ###Output /home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/orbit.py:608: UserWarning: Frame <class 'astropy.coordinates.builtin_frames.icrs.ICRS'> does not support 'obstime', time values were not returned ###Markdown If we needed also the names of each asteroid, we could do: ###Code frame = StaticOrbitPlotter() frame.plot(earth, label='Earth') for i in range(len(atiras)): record = atiras['NO'][i] 
label = atiras['ASTNAM'][i].decode().strip() # DASTCOM5 strings are binary ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, label=label) ###Output _____no_output_____ ###Markdown We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted. Finally, another interesting function in `dastcom5` is `entire_db()`, which is really similar to `ast_db` and `com_db`, but it returns a `Pandas dataframe` instead of a `numpy ndarray`. The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed: ###Code db = dastcom5.entire_db() db.columns ###Output _____no_output_____ ###Markdown Also, in this function, DASTCOM5 data (specially strings) is ready to use (decoded and improved strings, etc): ###Code db[db.NAME == 'Halley'] # As you can see, Halley is the name of an asteroid too, did you know that? ###Output _____no_output_____ ###Markdown Panda offers many functionalities, and can also be used in the same way as the `ast_db` and `comet_db` functions: ###Code aphelion_condition = (2 * db['A'] - db['QR']) < 0.983 axis_condition = db['A'] < 1.3 atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown What? I said they can be used in the same way! Dont worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis! 
###Code len(atiras[atiras.A < 0]) ###Output _____no_output_____ ###Markdown So, rewriting our condition: ###Code axis_condition = (db['A'] < 1.3) & (db['A'] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown Using NEOS package With the new `poliastro` version (0.7.0), a new package is included: [NEOs package](file:///C:/Users/Antonio/Desktop/Proyectos/poliastro/docs/source/html_output/api.htmlmodule-poliastro.neos).The docstrings of this package states the following:> Functions related to NEOs and different NASA APIs. All of them are coded as part of SOCIS 2017 proposal.So, first of all, an important question: What are NEOs?NEO stands for near-Earth object. The Center for NEO Studies ([CNEOS](http://cneos.jpl.nasa.gov/)) defines NEOs as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth’s neighborhood.And what does "near" exactly mean? In terms of orbital elements, asteroids and comets can be considered NEOs if their perihelion (orbit point which is nearest to the Sun) is less than 1.3 au = 1.945 * 108 km from the Sun. 
###Code import matplotlib.pyplot as plt plt.ion() from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.plotting import OrbitPlotter ###Output _____no_output_____ ###Markdown NeoWS moduleThis module make requests to [NASA NEO Webservice](https://api.nasa.gov/api.htmlNeoWS), so you'll need an internet connection to run the next examples.The simplest `neows` function is `orbit_from_name()`, which return an Orbit object given a name: ###Code from poliastro.neos import neows eros = neows.orbit_from_name('Eros') frame = OrbitPlotter() frame.plot(eros, label='Eros'); ###Output _____no_output_____ ###Markdown You can also search by IAU number or SPK-ID (there is a faster `neows.orbit_from_spk_id()` function in that case, although): ###Code ganymed = neows.orbit_from_name('1036') # Ganymed IAU number amor = neows.orbit_from_name('2001221') # Amor SPK-ID eros = neows.orbit_from_spk_id('2000433') # Eros SPK-ID frame = OrbitPlotter() frame.plot(ganymed, label='Ganymed') frame.plot(amor, label='Amor') frame.plot(eros, label='Eros'); ###Output _____no_output_____ ###Markdown Since `neows` relies on [Small-Body Database browser](https://ssd.jpl.nasa.gov/sbdb.cgi) to get the SPK-ID given a body name, you can use the wildcards from that browser: `*` and `?`. Keep it in mind that `orbit_from_name()` can only return one Orbit, so if several objects are found with that name, it will raise an error with the different bodies. ###Code neows.orbit_from_name('*alley') ###Output _____no_output_____ ###Markdown Note that epoch is provided by the Web Service itself, so if you need orbit on another epoch, you have to propagate it: ###Code eros.epoch.iso epoch = time.Time(2458000.0, scale='tdb', format='jd') eros_november = eros.propagate(epoch) eros_november.epoch.iso ###Output _____no_output_____ ###Markdown Given that we are using NASA APIs, there is a maximum number of requests. 
If you want to make many requests, it is recommended to get a [NASA API key](https://api.nasa.gov/index.html#apply-for-an-api-key). You can use your API key by adding the `api_key` parameter to the function: ###Code neows.orbit_from_name('Toutatis', api_key='DEMO_KEY') ###Output _____no_output_____ ###Markdown DASTCOM5 moduleThis module can also be used to get NEO orbits, in the same way as `neows`, but it has some advantages (and some disadvantages). It relies on the DASTCOM5 database, a NASA/JPL-maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is typically updated a couple of times per day, but potentially as frequently as once per hour, so you can re-download it whenever you want the most recently discovered bodies. This also means that, after downloading the file, you can use the database offline. The file is a ~230 MB zip that you can manually [download](ftp://ssd.jpl.nasa.gov/pub/ssd/dastcom5.zip) and unzip in `~/.poliastro` or, more easily, you can use:
```Python
dastcom5.download_dastcom5()
```
The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is `orbit_from_name()`: ###Code from poliastro.neos import dastcom5 atira = dastcom5.orbit_from_name('atira')[0] # NEO wikipedia = dastcom5.orbit_from_name('wikipedia')[0] # Asteroid, but not NEO. frame = OrbitPlotter() frame.plot(atira, label='Atira (NEO)') frame.plot(wikipedia, label='Wikipedia (asteroid)'); ###Output _____no_output_____ ###Markdown Keep in mind that this function returns a list of orbits matching your string.
This is made on purpose, given that there are comets which have several records in the database (one for each orbit determination in history), which allows plots like this one: ###Code halleys = dastcom5.orbit_from_name('1P') frame = OrbitPlotter() frame.plot(halleys[0], label='Halley') frame.plot(halleys[5], label='Halley') frame.plot(halleys[10], label='Halley') frame.plot(halleys[20], label='Halley') frame.plot(halleys[-1], label='Halley'); ###Output _____no_output_____ ###Markdown While `neows` can only be used to get Orbit objects, `dastcom5` can also provide the complete asteroid and comet databases. Once you have these, you can get specific data about one or more bodies. The complete databases are `ndarrays`, so if you want to know the entire list of available parameters, you can look at the `dtype`; they are also explained in the [documentation API Reference](https://docs.poliastro.space/en/latest/dastcom5 parameters.html): ###Code ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[:20] # They are more than 100, but that would be too many lines in this notebook :P ###Output _____no_output_____ ###Markdown Asteroid and comet parameters are not exactly the same (although they are very close): With these `ndarrays` you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind.For example, NEOs can be grouped in several ways. One of the NEO groups is called `Atiras`, and is formed by NEOs whose orbits are contained entirely within the orbit of the Earth. They are a really small group, and we can try to plot all of these NEOs using `asteroid_db()`: Talking in orbital terms, `Atiras` have an aphelion distance `Q < 0.983 au` and a semi-major axis `a < 1.0 au`. Visiting the [documentation API Reference](https://docs.poliastro.space/en/latest/dastcom5 parameters.html), you can see that DASTCOM5 provides the semi-major axis, but doesn't provide the aphelion distance.
You can get the aphelion distance easily knowing the perihelion distance (q, QR in DASTCOM5) and the semi-major axis, via `Q = 2*a - q`, but there are probably many other ways. ###Code aphelion_condition = 2 * ast_db['A'] - ast_db['QR'] < 0.983 axis_condition = ast_db['A'] < 1.3 atiras = ast_db[aphelion_condition & axis_condition] ###Output _____no_output_____ ###Markdown The number of `Atira NEOs` we get using this method is: ###Code len(atiras) ###Output _____no_output_____ ###Markdown Which is consistent with the [stats published by CNEOS](https://cneos.jpl.nasa.gov/stats/totals.html) Now we're gonna plot all of their orbits, with corresponding labels, just because we love plots :) ###Code from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth earth = Orbit.from_body_ephem(Earth) ###Output _____no_output_____ ###Markdown We only need to get the 16 orbits from these 16 `ndarrays`. There are two ways:* Gather all their orbital elements manually and use the `Orbit.from_classical()` function.* Use the `NO` property (the logical record number in the DASTCOM5 database) and the `dastcom5.orbit_from_record()` function. The second one seems easier and is more related to the current notebook, so we are going to use that one. We are going to use the `ASTNAM` property of the DASTCOM5 database: ###Code frame = OrbitPlotter() frame.plot(earth, label='Earth') for record in atiras['NO']: ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, color="#666666") ###Output /home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/orbit.py:481: UserWarning: Frame <class 'astropy.coordinates.builtin_frames.icrs.ICRS'> does not support 'obstime', time values were not returned ###Markdown If we also needed the names of each asteroid, we could do: ###Code frame = OrbitPlotter() frame.plot(earth, label='Earth') for i in range(len(atiras)): record = atiras['NO'][i] label = atiras['ASTNAM'][i].decode().strip() # DASTCOM5 strings are binary ss =
dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, label=label) ###Output /home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/orbit.py:481: UserWarning: Frame <class 'astropy.coordinates.builtin_frames.icrs.ICRS'> does not support 'obstime', time values were not returned ###Markdown We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted. Finally, another interesting function in `dastcom5` is `entire_db()`, which is really similar to `ast_db` and `com_db`, but returns a `Pandas dataframe` instead of a `numpy ndarray`. The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed: ###Code db = dastcom5.entire_db() db.columns ###Output _____no_output_____ ###Markdown Also, in this function, DASTCOM5 data (especially strings) is ready to use (decoded and improved strings, etc.): ###Code db[db.NAME == 'Halley'] # As you can see, Halley is the name of an asteroid too, did you know that? ###Output _____no_output_____ ###Markdown Pandas offers many functionalities, and can also be used in the same way as the `ast_db` and `comet_db` functions: ###Code aphelion_condition = (2 * db['A'] - db['QR']) < 0.983 axis_condition = db['A'] < 1.3 atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____ ###Markdown What? I said they can be used in the same way! Don't worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis! ###Code len(atiras[atiras.A < 0]) ###Output _____no_output_____ ###Markdown So, rewriting our condition: ###Code axis_condition = (db['A'] < 1.3) & (db['A'] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras) ###Output _____no_output_____
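Those negative semi-major axes come from hyperbolic comet solutions: with the conic relation a = q / (1 − e), any eccentricity e > 1 forces a < 0. A minimal sketch — the numeric values are illustrative, not taken from the database:

```python
def semi_major_axis(q_au, ecc):
    """Semi-major axis from perihelion distance and eccentricity: a = q / (1 - e)."""
    return q_au / (1.0 - ecc)

print(semi_major_axis(0.587, 0.967))  # elliptic orbit (e < 1): a comes out positive
print(semi_major_axis(1.0, 1.2))      # hyperbolic orbit (e > 1): a comes out negative
```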
Programming assignment, week 4_ Ensembles/Programming_assignment_week_4.ipynb
###Markdown Version 1.0.1 Check your versions ###Code import numpy as np import pandas as pd import sklearn import scipy.sparse import lightgbm for p in [np, pd, scipy, sklearn, lightgbm]: print (p.__name__, p.__version__) ###Output numpy 1.13.1 pandas 0.20.3 scipy 0.19.1 sklearn 0.19.0 lightgbm 2.0.6 ###Markdown **Important!** There is a huge chance that the assignment will be impossible to pass if the versions of `lightgbm` and `scikit-learn` are wrong. The versions being tested: numpy 1.13.1 pandas 0.20.3 scipy 0.19.1 sklearn 0.19.0 lightgbm 2.0.6 To install an older version of `lightgbm` you may use the following commands:
```
pip uninstall lightgbm
pip install lightgbm==2.0.6
```
Ensembling In this programming assignment you are asked to implement two ensembling schemes: simple linear mix and stacking. We will spend several cells loading the data and creating the feature matrix; you can scroll past this part or try to understand what's happening. ###Code import pandas as pd import numpy as np import gc import matplotlib.pyplot as plt %matplotlib inline pd.set_option('display.max_rows', 600) pd.set_option('display.max_columns', 50) import lightgbm as lgb from sklearn.linear_model import LinearRegression from sklearn.metrics import r2_score from tqdm import tqdm_notebook from itertools import product def downcast_dtypes(df): ''' Changes column types in the dataframe: `float64` type to `float32` `int64` type to `int32` ''' # Select columns to downcast float_cols = [c for c in df if df[c].dtype == "float64"] int_cols = [c for c in df if df[c].dtype == "int64"] # Downcast df[float_cols] = df[float_cols].astype(np.float32) df[int_cols] = df[int_cols].astype(np.int32) return df ###Output _____no_output_____ ###Markdown Load data subset Let's load the data from the hard drive first.
###Code sales = pd.read_csv('../readonly/final_project_data/sales_train.csv.gz') shops = pd.read_csv('../readonly/final_project_data/shops.csv') items = pd.read_csv('../readonly/final_project_data/items.csv') item_cats = pd.read_csv('../readonly/final_project_data/item_categories.csv') ###Output _____no_output_____ ###Markdown And use only 3 shops for simplicity. ###Code sales = sales[sales['shop_id'].isin([26, 27, 28])] ###Output _____no_output_____ ###Markdown Get a feature matrix We now need to prepare the features. This part is all implemented for you. ###Code # Create "grid" with columns index_cols = ['shop_id', 'item_id', 'date_block_num'] # For every month we create a grid from all shops/items combinations from that month grid = [] for block_num in sales['date_block_num'].unique(): cur_shops = sales.loc[sales['date_block_num'] == block_num, 'shop_id'].unique() cur_items = sales.loc[sales['date_block_num'] == block_num, 'item_id'].unique() grid.append(np.array(list(product(*[cur_shops, cur_items, [block_num]])),dtype='int32')) # Turn the grid into a dataframe grid = pd.DataFrame(np.vstack(grid), columns = index_cols,dtype=np.int32) # Groupby data to get shop-item-month aggregates gb = sales.groupby(index_cols,as_index=False).agg({'item_cnt_day':{'target':'sum'}}) # Fix column names gb.columns = [col[0] if col[-1]=='' else col[-1] for col in gb.columns.values] # Join it to the grid all_data = pd.merge(grid, gb, how='left', on=index_cols).fillna(0) # Same as above but with shop-month aggregates gb = sales.groupby(['shop_id', 'date_block_num'],as_index=False).agg({'item_cnt_day':{'target_shop':'sum'}}) gb.columns = [col[0] if col[-1]=='' else col[-1] for col in gb.columns.values] all_data = pd.merge(all_data, gb, how='left', on=['shop_id', 'date_block_num']).fillna(0) # Same as above but with item-month aggregates gb = sales.groupby(['item_id', 'date_block_num'],as_index=False).agg({'item_cnt_day':{'target_item':'sum'}}) gb.columns = [col[0] if col[-1] == '' 
else col[-1] for col in gb.columns.values] all_data = pd.merge(all_data, gb, how='left', on=['item_id', 'date_block_num']).fillna(0) # Downcast dtypes from 64 to 32 bit to save memory all_data = downcast_dtypes(all_data) del grid, gb gc.collect(); ###Output _____no_output_____ ###Markdown After creating a grid, we can calculate some features. We will use lags from [1, 2, 3, 4, 5, 12] months ago. ###Code # List of columns that we will use to create lags cols_to_rename = list(all_data.columns.difference(index_cols)) shift_range = [1, 2, 3, 4, 5, 12] for month_shift in tqdm_notebook(shift_range): train_shift = all_data[index_cols + cols_to_rename].copy() train_shift['date_block_num'] = train_shift['date_block_num'] + month_shift foo = lambda x: '{}_lag_{}'.format(x, month_shift) if x in cols_to_rename else x train_shift = train_shift.rename(columns=foo) all_data = pd.merge(all_data, train_shift, on=index_cols, how='left').fillna(0) del train_shift # Don't use old data from year 2013 all_data = all_data[all_data['date_block_num'] >= 12] # List of all lagged features fit_cols = [col for col in all_data.columns if col[-1] in [str(item) for item in shift_range]] # We will drop these at fitting stage to_drop_cols = list(set(list(all_data.columns)) - (set(fit_cols)|set(index_cols))) + ['date_block_num'] # Category for each item item_category_mapping = items[['item_id','item_category_id']].drop_duplicates() all_data = pd.merge(all_data, item_category_mapping, how='left', on='item_id') all_data = downcast_dtypes(all_data) gc.collect(); ###Output _____no_output_____ ###Markdown At this point, we've created a feature matrix. It is stored in the `all_data` variable. Take a look: ###Code all_data.head(5) ###Output _____no_output_____ ###Markdown Train/test split For the sake of the programming assignment, let's artificially split the data into train and test. We will treat the last month's data as the test set.
###Code # Save `date_block_num`, as we can't use them as features, but will need them to split the dataset into parts dates = all_data['date_block_num'] last_block = dates.max() print('Test `date_block_num` is %d' % last_block) dates_train = dates[dates < last_block] dates_test = dates[dates == last_block] X_train = all_data.loc[dates < last_block].drop(to_drop_cols, axis=1) X_test = all_data.loc[dates == last_block].drop(to_drop_cols, axis=1) y_train = all_data.loc[dates < last_block, 'target'].values y_test = all_data.loc[dates == last_block, 'target'].values ###Output _____no_output_____ ###Markdown First level models You need to implement a basic stacking scheme. We have a time component here, so we will use ***scheme f)*** from the reading material. Recall that we always use first-level models to build two datasets: test meta-features and 2nd-level train meta-features. Let's see how we get test meta-features first. Test meta-features First, we will run *linear regression* on numeric columns and get predictions for the last month. ###Code lr = LinearRegression() lr.fit(X_train.values, y_train) pred_lr = lr.predict(X_test.values) print('Test R-squared for linreg is %f' % r2_score(y_test, pred_lr)) ###Output _____no_output_____ ###Markdown Then we run *LightGBM*. ###Code lgb_params = { 'feature_fraction': 0.75, 'metric': 'rmse', 'nthread':1, 'min_data_in_leaf': 2**7, 'bagging_fraction': 0.75, 'learning_rate': 0.03, 'objective': 'mse', 'bagging_seed': 2**7, 'num_leaves': 2**7, 'bagging_freq':1, 'verbose':0 } model = lgb.train(lgb_params, lgb.Dataset(X_train, label=y_train), 100) pred_lgb = model.predict(X_test) print('Test R-squared for LightGBM is %f' % r2_score(y_test, pred_lgb)) ###Output _____no_output_____ ###Markdown Finally, concatenate the test predictions to get the test meta-features. ###Code X_test_level2 = np.c_[pred_lr, pred_lgb] ###Output _____no_output_____ ###Markdown Train meta-features **Now it is your turn to write the code**.
You need to implement ***scheme f)*** from the reading material. Here, we will use duration **T** equal to one month and **M=15**. That is, you need to get predictions (meta-features) from *linear regression* and *LightGBM* for months 27, 28, 29, 30, 31, 32. Use the same parameters as in the above models. ###Code dates_train_level2 = dates_train[dates_train.isin([27, 28, 29, 30, 31, 32])] # That is how we get the target for the 2nd-level dataset y_train_level2 = y_train[dates_train.isin([27, 28, 29, 30, 31, 32])] # And here we create the 2nd-level feature matrix, init it with zeros first X_train_level2 = np.zeros([y_train_level2.shape[0], 2]) # Now fill `X_train_level2` with meta-features for cur_block_num in [27, 28, 29, 30, 31, 32]: print(cur_block_num) ''' 1. Split `X_train` into parts Remember that corresponding dates are stored in `dates_train` 2. Fit linear regression 3. Fit LightGBM and put predictions 4. Store predictions from 2. and 3. in the right place of `X_train_level2`. You can use `dates_train_level2` for it Make sure the order of the meta-features is the same as in `X_test_level2` ''' # YOUR CODE GOES HERE # Sanity check assert np.all(np.isclose(X_train_level2.mean(axis=0), [ 1.50148988, 1.38811989])) ###Output _____no_output_____ ###Markdown Remember, ensembles work best when first-level models are diverse. We can qualitatively analyze the diversity by examining a *scatter plot* between the two meta-features. Plot the scatter plot below. ###Code # YOUR CODE GOES HERE ###Output _____no_output_____ ###Markdown Ensembling Now that the meta-features are created, we can ensemble our first-level models. Simple convex mix Let's start with a simple linear convex mix:$$mix= \alpha\cdot\text{linreg\_prediction}+(1-\alpha)\cdot\text{lgb\_prediction}$$We need to find an optimal $\alpha$. And it is very easy, as it is feasible to do a grid search. Next, find the optimal $\alpha$ out of the `alphas_to_try` array.
Remember that you need to use train meta-features (not test) when searching for $\alpha$. ###Code alphas_to_try = np.linspace(0, 1, 1001) # YOUR CODE GOES HERE best_alpha = # YOUR CODE GOES HERE r2_train_simple_mix = # YOUR CODE GOES HERE print('Best alpha: %f; Corresponding r2 score on train: %f' % (best_alpha, r2_train_simple_mix)) ###Output _____no_output_____ ###Markdown Now use the $\alpha$ you've found to compute predictions for the test set ###Code test_preds = # YOUR CODE GOES HERE r2_test_simple_mix = # YOUR CODE GOES HERE print('Test R-squared for simple mix is %f' % r2_test_simple_mix) ###Output _____no_output_____ ###Markdown Stacking Now, we will try a more advanced ensembling technique. Fit a linear regression model to the meta-features. Use the same parameters as in the model above. ###Code # YOUR CODE GOES HERE ###Output _____no_output_____ ###Markdown Compute R-squared on the train and test sets. ###Code train_preds = # YOUR CODE GOES HERE r2_train_stacking = # YOUR CODE GOES HERE test_preds = # YOUR CODE GOES HERE r2_test_stacking = # YOUR CODE GOES HERE print('Train R-squared for stacking is %f' % r2_train_stacking) print('Test R-squared for stacking is %f' % r2_test_stacking) ###Output _____no_output_____ ###Markdown Interestingly, the score turned out to be lower than with the previous method. Although the model is very simple (just 3 parameters) and, in fact, mixes predictions linearly, it looks like it managed to overfit. **Examine and compare** train and test scores for the two methods. And of course this particular case does not mean a simple mix is always better than stacking. We're all done! Submit everything we need to the grader now.
###Code from grader import Grader grader = Grader() grader.submit_tag('best_alpha', best_alpha) grader.submit_tag('r2_train_simple_mix', r2_train_simple_mix) grader.submit_tag('r2_test_simple_mix', r2_test_simple_mix) grader.submit_tag('r2_train_stacking', r2_train_stacking) grader.submit_tag('r2_test_stacking', r2_test_stacking) STUDENT_EMAIL = # EMAIL HERE STUDENT_TOKEN = # TOKEN HERE grader.status() grader.submit(STUDENT_EMAIL, STUDENT_TOKEN) ###Output _____no_output_____
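###Markdown As an aside — and not a solution to the graded cells above — the mechanics of the convex-mix grid search are easy to demonstrate on synthetic data. Everything below (the hand-rolled `r2` helper, the two noisy stand-in predictions) is illustrative, unrelated to the assignment's data:

```python
import numpy as np

def r2(y, p):
    """Coefficient of determination, computed by hand to keep the sketch self-contained."""
    ss_res = ((y - p) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
y = rng.normal(size=500)
pred_a = y + rng.normal(scale=0.5, size=500)  # stand-in for the first meta-feature
pred_b = y + rng.normal(scale=0.7, size=500)  # stand-in for the second meta-feature

alphas = np.linspace(0, 1, 1001)
scores = [r2(y, a * pred_a + (1 - a) * pred_b) for a in alphas]
best_alpha = alphas[int(np.argmax(scores))]
print(best_alpha, max(scores))
```

Since alpha = 1 and alpha = 0 recover the individual predictions, the best mixed score on the search data can never be worse than either stand-in model alone.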
complaints_py.ipynb
###Markdown ###Code from google.colab import files files.upload() !pip install -q kaggle !mkdir -p ~/.kaggle !cp kaggle.json ~/.kaggle/ !kaggle datasets list # coding: utf-8 # # Multiclass Classification For User Complaints in Automotive # # ## Introduction # This is an NLP-based problem solving approach for the dataset available at http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html #domain - automotive import nltk import pickle import gensim import pandas as pd from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer from nltk.corpus import wordnet as wn from stop_words import get_stop_words import re, sys, math, string import calendar as cal import numpy as np from ast import literal_eval import logging from gensim.models import word2vec #from textblob import TextBlob from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") from sklearn.model_selection import train_test_split from sklearn.utils import shuffle from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.svm import LinearSVC from sklearn.calibration import CalibratedClassifierCV from keras.layers import Embedding from keras.layers import Input, Dense, Embedding, Conv2D, MaxPooling2D, Dropout,concatenate from keras.layers.core import Reshape, Flatten from keras.callbacks import EarlyStopping from keras.optimizers import Adam from keras.models import Model from keras import regularizers from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.utils import to_categorical from gensim.models.keyedvectors import KeyedVectors from sklearn.naive_bayes import MultinomialNB from sklearn import metrics import altair as alt import seaborn as sns from 
sklearn.feature_extraction.text import TfidfVectorizer from numpy import array main_df = pd.read_csv('data/Consumer_Complaints.csv') stplist = ['title', 'body', 'xxxx'] english_stopwords = get_stop_words(language='english') english_stopwords += stplist english_stopwords = list(set(english_stopwords)) def get_wordnet_pos(word): """ Function that determines the the Part-of-speech (POS) tag. Acts as input to lemmatizer. Result is of the form: [('complaint', 'NN'), ... ] """ if word.startswith('N'): return wn.NOUN elif word.startswith('V'): return wn.VERB elif word.startswith('J'): return wn.ADJ elif word.startswith('R'): return wn.ADV else: return wn.NOUN def clean_up(text): """ Function to clean data. Steps: - Removing special characters, numbers - Lemmatization - Stop-words removal - Getting a unique list of words - TODO: try removing names and company names like Navient (Proper nouns) """ #lemma = WordNetLemmatizer() lemmatizer = nltk.WordNetLemmatizer().lemmatize text = re.sub('\W+', ' ', str(text)) text = re.sub(r'[0-9]+', '', text.lower()) # correcting spellings of words using TextBlob - user complaints are bound to have spelling mistakes # However, this idea was later dropped because TextBlob may change the words. # text = TextBlob(text).correct() word_pos = nltk.pos_tag(nltk.word_tokenize(text)) normalized_text_lst = [lemmatizer(x[0], get_wordnet_pos(x[1])).lower() for x in word_pos] stop_words_free = [i for i in normalized_text_lst if i not in english_stopwords and len(i) > 3] stop_words_free = list(set(stop_words_free)) return(stop_words_free) def get_average_word2vec(complaints_lst, model, num_features=300): """ Function to average the vectors in a list. Say a list contains 'flower' and 'leaf'. Then this function gives - model[flower] + model[leaf]/2 - index2words gets the list of words in the model. - Gets the list of words that are contained in index2words (vectorized_lst) and the number of those words (nwords). 
- Gets the average using these two and numpy. """ #complaint_feature_vecs = np.zeros((len(complaints_lst),num_features), dtype="float32") #?used? index2word_set = set(model.wv.index2word) vectorized_lst = [] vectorized_lst = [model[word] if word in index2word_set else np.zeros(num_features) for word in complaints_lst] nwords = len(vectorized_lst) summed = np.sum(vectorized_lst, axis=0) averaged_vector = np.divide(summed, nwords) return averaged_vector # ----------------------------------------------------------------------------------------------------------------- # ## Technique 2: Word2Vec # I tried creating my own model for Word2Vec. However, this only contained 17million words, as opposed to Google's GoogleNews' pretrained Word2Vec model. So, I chose to go ahead with the pre-trained model. # In lieu of time, I couldn't do this - but I would have preferred to complement the Google Word2Vec model with words from this dataset. This Word2Vec model is up until 2013, post which slang/other important words might have been introduced in the vocabulary. # Of course, these words could also be company-complaint specific. For example, for ATB Bank, someone might be using ATB bank or a specific Policy name like ATBUltraInsurance. These would also be removed. # Apart from this, these complaints contain a lot of spelling mistakes and words joined together. Such as: `immeditalely`, `demaging`, `practiciing`, etc. (shown as missing_words in the cells below), and two words joined together into one word, such as 'givenrequesting'. # I tried looking into it and found out about a library called TextBlob. However, people also warned against its used because it might not always be right. So I chose to not use it and skip over these words for now. # There were also short forms not detected by the model. 
# Creating a Word2Vec model using training set vocabulary_of_all_words = input_df['complaint'].tolist() num_features = 300 min_word_count = 10 num_workers = 8 context = 10 # Context window size downsampling = 1e-3 # Downsampling for frequent words word2vec_model_name = "trained_models/300features_10minwords_10context1" word2vec_complaints = word2vec.Word2Vec(vocabulary_of_all_words, workers=num_workers, size=num_features, min_count=min_word_count, window=context, sample=downsampling) word2vec_complaints.save(word2vec_model_name) # Fetching trained model to save time. word2vec_complaints = gensim.models.Word2Vec.load(word2vec_model_name) vocab_lst_flat = [item for sublist in vocabulary_of_all_words for item in sublist] vocab_lst_flat = list(set(vocab_lst_flat)) # Loading a pre-trained GoogleNews model # word2vec_model = KeyedVectors.load_word2vec_format("trained_models/GoogleNews-vectors-negative300.bin", binary=True) # Exploring this model to see how well it has trained and checking for spelling mistakes in user-complaints try: word2vec_complaints.wv.most_similar("good") except KeyError: print("Sorry, this word doesn't exist in the vocabulary.") words_not_present = 0 words_present = 0 total_unique_tokens = len(set(vocab_lst_flat)) missing_words = [] for i in vocab_lst_flat: try: p = word2vec_complaints[i] words_present+=1 except KeyError: missing_words.append(i) words_not_present+=1 print(words_present, words_not_present, total_unique_tokens) # Examples of spelling mistakes, grammatical errors, etc. print(missing_words[:20]) # #### Choosing a Word2Vec Model # - The Google word2vec model isn't able to account for a lot of words. It can be made better by retraining on more words from the training set. However, a lot of these words are spelling mistakes. # - The presence of 'xxxx', 'xx', etc. in various forms is a simple fix which can also be implemented. # - Initially, I had planned to use Google's pretrained Word2Vec model. 
However, after waiting for hours for the Google word2vec model to train, I switched back to the Word2Vec model for want of speed. # # These take a very long time to be averaged. This code is commented out; the saved output is read back from file instead. # embeddings_df = input_df['complaint'].apply(lambda complaint: get_average_word2vec(complaint, word2vec_complaints, # num_features)).to_frame() # col_lst = [] # for i in range(num_features): # col_lst.append('vec_'+str(i+1)) # # Easy to write to file and process when exploded into columns # exploded_em_df = pd.DataFrame(embeddings_df.complaint.tolist(), columns=col_lst) # exploded_em_df = pd.DataFrame(embeddings_df)['complaint'].apply(pd.Series) # exploded_em_df.head() # exploded_em_df.to_csv("data/modified/vocab_trained_word2Vec.csv", index=False) exploded_em_df = pd.read_csv('data/modified/vocab_trained_word2Vec.csv') print("Word2Vec output:\n") exploded_em_df.head() input_df = input_df.reset_index(drop=True) vectorized_df = pd.concat([exploded_em_df, input_df[['product']]], axis=1) vectorized_df = shuffle(vectorized_df) # Use a boolean flag so that the `if not res` branch below can actually fire if vectorized_df[vectorized_df.isnull().any(axis=1)].empty: res = True # No NaNs exist in the cleaned dataset. else: res = False print(res) print(vectorized_df.shape) if not res: # dropna returns a new frame, so assign the result back vectorized_df = vectorized_df.dropna(axis=0, how='any') print(vectorized_df.shape) # ### Training and Test Sets vectorized_data = np.array(vectorized_df.drop('product', axis=1)) vectorized_target = np.array(vectorized_df['product']) train_x, test_x, train_y, test_y = train_test_split(vectorized_data, vectorized_target, test_size=0.3, random_state=123) # 3. Deep Neural Network - CNN: Upon reading some discussion on this online, I thought of implementing CNNs. It said that what has recently been shown to work much better and simpler than RNNs is using word vectors, pre-trained on a large corpus, as features to the neural network. RNNs were called 'slow and fickle to train'.
# Model 3: CNN using Keras from keras.layers import Embedding from keras.preprocessing.text import Tokenizer NUM_WORDS = 20000 texts = train_df.complaints_untokenized products_unique = vectorized_df['product'].unique() dict_products = {} for i, complaint in enumerate(products_unique): dict_products[complaint] = i labels = vectorized_df['product'].apply(lambda x:dict_products[x]) vocab_lst_flat = [item for sublist in vocabulary_of_all_words for item in sublist] tokenizer = Tokenizer(num_words=NUM_WORDS,filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n\'', lower=True) tokenizer.fit_on_texts(texts) sequences_train = tokenizer.texts_to_sequences(texts) sequences_valid=tokenizer.texts_to_sequences(val_df.complaints_untokenized) word_index = tokenizer.word_index EMBEDDING_DIM=300 vocabulary_size=min(len(word_index) + 1, NUM_WORDS) embedding_layer = Embedding(vocabulary_size, EMBEDDING_DIM) train_df = train_df.drop(val_df.index) size_train = len(train_x) size_test = len(test_x) output_labels_unique = np.asarray(sorted(list(set(labels)))) X_train = pad_sequences(sequences_train) X_val = pad_sequences(sequences_valid,maxlen=X_train.shape[1]) #test # convert into dummy representation of the output labels y_train = to_categorical(np.asarray(labels[train_df.index])) y_val = to_categorical(np.asarray(labels[val_df.index])) sequence_length = X_train.shape[1] filter_sizes = [3,4,5] num_filters = 100 drop = 0.5 output_dim = len(products_unique) print('Shape of X train and X test tensors:', X_train.shape, X_val.shape) print('Shape of label train and test tensors:', y_train.shape, y_val.shape) inputs = Input(shape=(sequence_length,)) embedding = embedding_layer(inputs) reshape = Reshape((sequence_length, EMBEDDING_DIM, 1))(embedding) conv_0 = Conv2D(num_filters, (filter_sizes[0], EMBEDDING_DIM), activation='relu', kernel_regularizer=regularizers.l2(0.01))(reshape) conv_1 = Conv2D(num_filters, (filter_sizes[1], EMBEDDING_DIM), activation='relu', 
kernel_regularizer=regularizers.l2(0.01))(reshape) conv_2 = Conv2D(num_filters, (filter_sizes[2], EMBEDDING_DIM), activation='relu', kernel_regularizer=regularizers.l2(0.01))(reshape) maxpool_0 = MaxPooling2D((sequence_length - filter_sizes[0] + 1, 1), strides=(1,1))(conv_0) maxpool_1 = MaxPooling2D((sequence_length - filter_sizes[1] + 1, 1), strides=(1,1))(conv_1) maxpool_2 = MaxPooling2D((sequence_length - filter_sizes[2] + 1, 1), strides=(1,1))(conv_2) merged_tensor = concatenate([maxpool_0, maxpool_1, maxpool_2], axis=1) flatten = Flatten()(merged_tensor) reshape = Reshape((3*num_filters,))(flatten) dropout = Dropout(drop)(flatten) output = Dense(units=output_dim, activation='softmax', kernel_regularizer=regularizers.l2(0.01))(dropout) cnn_model = Model(inputs, output) adam = Adam(lr=1e-3) cnn_model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['acc']) callbacks = [EarlyStopping(monitor='val_loss')] cnn_model.fit(X_train, y_train, batch_size=1000, epochs=10, verbose=1, validation_data=(X_val, y_val), callbacks=callbacks) # Predicting on the test set sequences_test = test_x X_test = pad_sequences(sequences_test, maxlen=X_train.shape[1]) cnn_preds = cnn_model.predict(X_test) print("Predictions from CNN completed.") # cnn_preds holds softmax probabilities, so map each row to its most likely class name before comparing cnn_results = pd.DataFrame(data={"actual_label": test_y, "predicted_label": products_unique[cnn_preds.argmax(axis=1)]}) # Accuracy: wherever the labels were correctly predicted. cnn_results['correctly_predicted'] = np.where(cnn_results['actual_label'] == cnn_results['predicted_label'], 1, 0) cnn_accuracy = (cnn_results['correctly_predicted'].sum()/cnn_results.shape[0])*100 print("Accuracy of the CNN Model is: {0:.2f}.".format(cnn_accuracy)) # ----------------------------------------------------------------------------------------------------------------- # ## Conclusion # # - The model that performed best was: CNN with SQuAD-like pre-training. It gave an accuracy measure of: 75.30%. This was obtained with the word2Vec model made out of the training set.
Further, # the gensim word model was used to create the sentence-level representations of the consumer complaints after the pre-training # ----------------------------------------------------------------------------------------------------------------- ###Output _____no_output_____
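As a sanity check on the evaluation step above: `predict` on a softmax output layer returns one probability per class, so the predicted label is the argmax over classes before comparing against integer labels. A minimal numpy sketch with hypothetical `probs` and `actual` arrays (not the notebook's real data):

```python
import numpy as np

# Hypothetical softmax output for 4 samples over 3 product classes
probs = np.array([
    [0.10, 0.70, 0.20],
    [0.80, 0.10, 0.10],
    [0.20, 0.30, 0.50],
    [0.05, 0.90, 0.05],
])
actual = np.array([1, 0, 2, 0])  # hypothetical true class indices

# One probability per class -> take the argmax to recover class indices
predicted = np.argmax(probs, axis=1)           # -> [1 0 2 1]
accuracy = (predicted == actual).mean() * 100  # percentage correctly predicted

print(predicted)  # [1 0 2 1]
print(accuracy)   # 75.0
```

Comparing the raw probability vectors to `test_y` directly would never match, which is why the argmax step matters.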
day3/notebooks/lgde-spark-core-1-basic-answer.ipynb
###Markdown Session 1: Basic Spark Commands> Understand Spark's basic commands and structure Table of Contents* [1. Reading CSV Files with Spark](1.-Reading-CSV-Files-with-Spark)* [2. Comparing Spark's Two Programming Styles](2.-Comparing-Spark's-Two-Programming-Styles)* [3. Reading JSON Files with Spark](3.-Reading-JSON-Files-with-Spark)* [4. Creating and Querying View Tables](4.-Creating-and-Querying-View-Tables)* [5. Understanding Spark Application Concepts](5.-Understanding-Spark-Application-Concepts)* [6. Spark UI](6.-Spark-UI)* [7. M&M Chocolate Classification Example](7.-M&M-Chocolate-Classification-Example)* [8. Exercises](8.-Exercises)* [References](References) 1. Reading CSV Files with Spark> Written against Spark 3.0.1. With the Spark 2.0 update, DataFrames were unified into Datasets, so both the operations previously available on RDDs and those of the DataFrame API can be used. The Spark data model was upgraded in the order RDD (Spark 1.0) —> DataFrame (Spark 1.3) —> Dataset (Spark 1.6); since DataFrames and Datasets are nearly identical, the two terms may occasionally be used interchangeably in this text. ###Code from pyspark.sql import * from pyspark.sql.functions import * from pyspark.sql.types import * from IPython.display import display, display_pretty, clear_output, JSON spark = ( SparkSession .builder .config("spark.sql.session.timeZone", "Asia/Seoul") .getOrCreate() ) # Configure the notebook to display DataFrames as tables spark.conf.set("spark.sql.repl.eagerEval.enabled", True) # display enabled spark.conf.set("spark.sql.repl.eagerEval.truncate", 100) # display output columns size # !which python !/opt/conda/bin/python --version print("spark.version: {}".format((spark.version))) spark ###Output Python 3.8.6 spark.version: 3.0.1 ###Markdown Tips for Using Spark Writing code across multiple lines* For python code, wrapping it in parentheses means you do not need line escaping (\)* For sql statements, wrapping them in """sql""" means you do not need escaping (\) Functions for displaying data* DataFrame.show() - the built-in function; you can adjust the maximum output with show(n=limit), but tables can break when printing Korean text* display(DataFrame) - an IPython function; you need to apply a limit yourself for bounded output, but Korean text renders without breaking ###Code ## Writing Python code across multiple lines json = ( spark .read .json("data/tmp/simple.json") .limit(2) ) ## Writing Spark SQL across multiple lines json.createOrReplaceTempView("simple") spark.sql(""" select * from simple """) json.printSchema() emp_id = json.select("emp_id") ## Standard display function json.show() emp_id.show() ## Notebook display function display(json) display(emp_id) ###Output root |-- 
emp_id: long (nullable = true) |-- emp_name: string (nullable = true) +------+--------+ |emp_id|emp_name| +------+--------+ | 1|엘지전자| | 2|엘지화학| +------+--------+ +------+ |emp_id| +------+ | 1| | 2| +------+ ###Markdown Container-Based Notebook> You can directly access files that exist inside the container ###Code strings = spark.read.text("../requirements.txt") strings.show(5, truncate=False) count = strings.count() print("count of word is {}".format(count)) strings.printSchema() from pyspark.sql.dataframe import DataFrame from pyspark.sql.column import Column assert(type(strings) == DataFrame) assert(type(strings.value) == Column) # the current strings DataFrame schema contains only a single column named value # help(strings) # a DataFrame uses Structured API functions that handle Row-typed records # help(strings.value) # a Column uses functions such as contains for comparing against columns or checking contained strings ###Output _____no_output_____ ###Markdown If you pass no options at all, Spark assigns column names and data types (string) on its own ###Code log_access = spark.read.csv("data/log_access.csv") log_access.printSchema() log_access.show() ###Output root |-- _c0: string (nullable = true) |-- _c1: string (nullable = true) |-- _c2: string (nullable = true) +----------+-----+------+ | _c0| _c1| _c2| +----------+-----+------+ | a_time|a_uid| a_id| |1603645200| 1| login| |1603647200| 1|logout| |1603649200| 2| login| |1603650200| 2|logout| |1603653200| 2| login| |1603657200| 3| login| |1603659200| 3|logout| |1603660200| 4| login| |1603664200| 4|logout| |1603664500| 4| login| |1603666500| 5| login| |1603669500| 5|logout| |1603670500| 6| login| |1603673500| 7| login| |1603674500| 8| login| |1603675500| 9| login| +----------+-----+------+ ###Markdown If the first line contains a header, you can pick up the column names by setting the header option as shown below ###Code log_access = spark.read.option("header", "true").csv("data/log_access.csv") log_access.printSchema() log_access.show() ###Output root |-- a_time: string (nullable = true) |-- a_uid: string (nullable = true) |-- a_id: string (nullable = true) +----------+-----+------+ | a_time|a_uid| a_id| 
+----------+-----+------+ |1603645200| 1| login| |1603647200| 1|logout| |1603649200| 2| login| |1603650200| 2|logout| |1603653200| 2| login| |1603657200| 3| login| |1603659200| 3|logout| |1603660200| 4| login| |1603664200| 4|logout| |1603664500| 4| login| |1603666500| 5| login| |1603669500| 5|logout| |1603670500| 6| login| |1603673500| 7| login| |1603674500| 8| login| |1603675500| 9| login| +----------+-----+------+ ###Markdown With the inferSchema option, Spark can inspect the data values and infer the data types ###Code log_access = spark.read.option("header", "true").option("inferSchema", "true").csv("data/log_access.csv") log_access.printSchema() log_access.show() ###Output root |-- a_time: integer (nullable = true) |-- a_uid: integer (nullable = true) |-- a_id: string (nullable = true) +----------+-----+------+ | a_time|a_uid| a_id| +----------+-----+------+ |1603645200| 1| login| |1603647200| 1|logout| |1603649200| 2| login| |1603650200| 2|logout| |1603653200| 2| login| |1603657200| 3| login| |1603659200| 3|logout| |1603660200| 4| login| |1603664200| 4|logout| |1603664500| 4| login| |1603666500| 5| login| |1603669500| 5|logout| |1603670500| 6| login| |1603673500| 7| login| |1603674500| 8| login| |1603675500| 9| login| +----------+-----+------+ ###Markdown 1. 
[Basic] Print the schema and 10 rows of the file "data/flight-data/csv/2010-summary.csv"[Answer] Check your output > Your answer is correct if it was written in a similar way to the code below```pythondf1 = ( spark .read .option("header", "true") .option("inferSchema", "true") .csv("data/flight-data/csv/2010-summary.csv")) df1.printSchema()df1.show(10)``` ###Code # Write your practice code here and run it (Shift+Enter) df1 = ( spark .read .option("header", "true") .option("inferSchema", "true") .csv("data/flight-data/csv/2010-summary.csv") ) df1.printSchema() df1.show(10) ###Output root |-- DEST_COUNTRY_NAME: string (nullable = true) |-- ORIGIN_COUNTRY_NAME: string (nullable = true) |-- count: integer (nullable = true) +-----------------+-------------------+-----+ |DEST_COUNTRY_NAME|ORIGIN_COUNTRY_NAME|count| +-----------------+-------------------+-----+ | United States| Romania| 1| | United States| Ireland| 264| | United States| India| 69| | Egypt| United States| 24| |Equatorial Guinea| United States| 1| | United States| Singapore| 25| | United States| Grenada| 54| | Costa Rica| United States| 477| | Senegal| United States| 29| | United States| Marshall Islands| 44| +-----------------+-------------------+-----+ only showing top 10 rows ###Markdown 2. Comparing Spark's Two Programming Styles One. Printing data through structured API calls> When printing, a date stored as a bigint value can be converted with the from_unixtime and to_timestamp functions as shown below. 
###Code from pyspark.sql.functions import unix_timestamp, from_unixtime, to_timestamp, to_date, col, lit df = spark.read.option("inferSchema", "true").json("data/activity-data") # syntax using the structured API timestamp = df.select( "Arrival_Time", to_timestamp(from_unixtime(col('Arrival_Time') / lit(1000)), 'yyyy-MM-dd HH:mm:ss').alias('String_Datetime') ) timestamp.show(5) ###Output +-------------+-------------------+ | Arrival_Time| String_Datetime| +-------------+-------------------+ |1424686734992|2015-02-23 19:18:54| |1424686735190|2015-02-23 19:18:55| |1424686735395|2015-02-23 19:18:55| |1424686735593|2015-02-23 19:18:55| |1424686735795|2015-02-23 19:18:55| +-------------+-------------------+ only showing top 5 rows ###Markdown Two. Printing by writing expressions directly> Using columns (col) or functions (concat, etc.) directly is called using the **structured API**, while expressing things in SQL syntax is called using a **SQL expression** ###Code # syntax using a SQL expression ts = df.selectExpr( "Arrival_Time", "to_timestamp(from_unixtime(Arrival_Time / 1000), 'yyyy-MM-dd HH:mm:ss') as String_Datetime" ) ts.show(5) ###Output +-------------+-------------------+ | Arrival_Time| String_Datetime| +-------------+-------------------+ |1424686734992|2015-02-23 19:18:54| |1424686735190|2015-02-23 19:18:55| |1424686735395|2015-02-23 19:18:55| |1424686735593|2015-02-23 19:18:55| |1424686735795|2015-02-23 19:18:55| +-------------+-------------------+ only showing top 5 rows ###Markdown 2. [Basic] Read the json files stored at "data/activity-data" and 1. Print the schema 2. Print 10 rows 3. 
Print a 'String_Creation_Date' column derived from the 'Creation_Time' column in year-month-day format> Note that Creation_Time has a different precision than Arrival_Time, so you must divide by `1000000000` rather than 1000[Exercise 2] Check your output > Your answer is correct if it was written in a similar way to the code below```pythondf2 = ( spark .read .option("header", "true") .option("inferSchema", "true") .json("data/activity-data")) df2.printSchema()display(df2.limit(3))answer = df2.limit(3).selectExpr( "Creation_Time", "to_timestamp(from_unixtime(Creation_Time / 1000000000), 'yyyy-MM-dd HH:mm:ss') as String_Creation_Date")answer.show(10)``` ###Code # Write your practice code here and run it (Shift+Enter) df2 = ( spark .read .option("header", "true") .option("inferSchema", "true") .json("data/activity-data") ) df2.printSchema() display(df2.limit(3)) answer = df2.limit(3).selectExpr( "Creation_Time", "to_timestamp(from_unixtime(Creation_Time / 1000000000), 'yyyy-MM-dd HH:mm:ss') as String_Creation_Date" ) answer.show(10) ###Output root |-- Arrival_Time: long (nullable = true) |-- Creation_Time: long (nullable = true) |-- Device: string (nullable = true) |-- Index: long (nullable = true) |-- Model: string (nullable = true) |-- User: string (nullable = true) |-- gt: string (nullable = true) |-- x: double (nullable = true) |-- y: double (nullable = true) |-- z: double (nullable = true) ###Markdown Expressions can be used not only with select but also with filter ###Code df.filter(col("index") > 100).select("index", "user").groupBy("user").count().show() # most constructs are implemented in both styles internally, so they can also be handled via expressions df.filter("index > 100").select("index", "user").groupBy("user").count().show() ###Output +----+-----+ |user|count| +----+-----+ | g|91650| | f|92030| | e|96000| | h|77300| | d|81220| | c|77130| | i|92530| | b|91210| | a|80824| +----+-----+ +----+-----+ |user|count| +----+-----+ | g|91650| | f|92030| | e|96000| | h|77300| | d|81220| | c|77130| | i|92530| | b|91210| | a|80824| +----+-----+ ###Markdown 3. 
Reading JSON Files with Spark> This uses the filter and groupBy constructs you will study later: operators for reducing data with a condition (filter) and aggregating by specific columns (groupBy) ###Code json = spark.read.json("data/activity-data") users = json.filter("index > 100").select("index", "user").groupBy("user").count() users.show(5) ###Output +----+-----+ |user|count| +----+-----+ | g|91650| | f|92030| | e|96000| | h|77300| | d|81220| +----+-----+ only showing top 5 rows ###Markdown 3. [Basic] Read the JSON data at "data/activity-data" and 1. Print the schema 2. Print 10 rows 3. Count the frequency per user ('user') for rows whose index is below 10000[Exercise 3] Check your output > Your answer is correct if it was written in a similar way to the code below```pythondf3 = ( spark .read .option("header", "true") .option("inferSchema", "true") .json("data/activity-data")) df3.printSchema()answer = df3.filter("index < 10000").groupBy("user").count()answer.show(10)``` ###Code # Write your practice code here and run it (Shift+Enter) df3 = ( spark .read .option("header", "true") .option("inferSchema", "true") .json("data/activity-data") ) df3.printSchema() answer = df3.filter("index < 10000").groupBy("user").count() answer.show(10) ###Output root |-- Arrival_Time: long (nullable = true) |-- Creation_Time: long (nullable = true) |-- Device: string (nullable = true) |-- Index: long (nullable = true) |-- Model: string (nullable = true) |-- User: string (nullable = true) |-- gt: string (nullable = true) |-- x: double (nullable = true) |-- y: double (nullable = true) |-- z: double (nullable = true) +----+-----+ |user|count| +----+-----+ | g| 2506| | f| 2490| | e| 2501| | h| 2500| | d| 2499| | c| 2494| | i| 2500| | b| 2500| | a| 2501| +----+-----+ ###Markdown 4. Creating and Querying View Tables> From a DataFrame that has already been created, you can build a temporary view table, queryable only in the current session, and run SQL against it. ###Code users.createOrReplaceTempView("users") spark.sql("select * from users where count is not null and count > 9000 order by count desc").show(5) ###Output +----+-----+ |user|count| +----+-----+ | e|96000| | i|92530| | f|92030| | g|91650| | b|91210| +----+-----+ only showing top 5 rows ###Markdown 4. 
[Basic] Read the JSON data at "data/flight-data/json/2015-summary.json" and 1. Create a temporary table named `2015_summary` 2. Print 10 rows using a spark sql statement[Exercise 4] Check your output > Your answer is correct if it was written in a similar way to the code below```pythondf4 = ( spark .read .option("header", "true") .option("inferSchema", "true") .json("data/flight-data/json/2015-summary.json")) df4.printSchema()answer = df4.createOrReplaceTempView("2015_summary")spark.sql("select * from 2015_summary").show(10)``` ###Code # Write your practice code here and run it (Shift+Enter) df4 = ( spark .read .option("header", "true") .option("inferSchema", "true") .json("data/flight-data/json/2015-summary.json") ) df4.printSchema() answer = df4.createOrReplaceTempView("2015_summary") spark.sql("select * from 2015_summary").show(10) ###Output root |-- DEST_COUNTRY_NAME: string (nullable = true) |-- ORIGIN_COUNTRY_NAME: string (nullable = true) |-- count: long (nullable = true) +-----------------+-------------------+-----+ |DEST_COUNTRY_NAME|ORIGIN_COUNTRY_NAME|count| +-----------------+-------------------+-----+ | United States| Romania| 15| | United States| Croatia| 1| | United States| Ireland| 344| | Egypt| United States| 15| | United States| India| 62| | United States| Singapore| 1| | United States| Grenada| 62| | Costa Rica| United States| 588| | Senegal| United States| 40| | Moldova| United States| 1| +-----------------+-------------------+-----+ only showing top 10 rows ###Markdown Several Ways to Read a JSON File ###Code # check the schema - all three give the same result, so pick whichever style you find convenient df = spark.read.format("json").load("./data/flight-data/json/2015-summary.json") # flight data provided by the U.S. Bureau of Transportation Statistics df.printSchema() df2 = spark.read.load("./data/flight-data/json/2015-summary.json", format="json") df2.printSchema() df3 = spark.read.json("./data/flight-data/json/2015-summary.json") df3.printSchema() ###Output root |-- DEST_COUNTRY_NAME: string (nullable = true) |-- ORIGIN_COUNTRY_NAME: string (nullable = true) |-- count: long (nullable = true) root |-- DEST_COUNTRY_NAME: string (nullable = true) |-- 
ORIGIN_COUNTRY_NAME: string (nullable = true) |-- count: long (nullable = true) root |-- DEST_COUNTRY_NAME: string (nullable = true) |-- ORIGIN_COUNTRY_NAME: string (nullable = true) |-- count: long (nullable = true) ###Markdown 5. Understanding Spark Application Concepts Objects and concepts you must know in Spark| Term | Description | Notes ||---|---|---|| Application | A program built on the Spark framework; it is divided into a Driver, which manages the overall job, and the programs that run on the Executors | - || SparkSession | The object you create in order to use all of Spark's functionality | - || Job | A unit of parallel processing composed of multiple tasks to perform one action (save, collect, etc.) | DAG, i.e. the Spark Execution Plan || Stage | A job consists of multiple stages, and a stage consists of multiple tasks | - || Task | A single unit of work sent to a Spark executor | one unit of work per Core or Partition | Spark Transformations and Actions| Term | Description | Notes ||---|---|---|| Transformation | Any operation that creates a new DataFrame without changing the original data; all transformations are lazily evaluated and maintain lineage | select, filter, join, groupBy, orderBy || Action | An operation that triggers the transformations deferred so far, such as retrieving or saving data | show, take, count, collect, save |> Lineage: the object that stores the entire history of a chained transformation pipeline up to the point an action is met; through these chained transformations Spark can perform **query optimization**, and through data immutability it gains **fault tolerance**. Narrow and Wide Transformations> The optimization mentioned above is the problem of splitting the operations into stages and deciding whether those stages require shuffling, that is, data exchange across the cluster; transformations are divided into two kinds, **narrow dependencies** that can be executed within a single partition and **wide dependencies** that trigger a shuffle and require data exchange across the whole cluster![Transformation](images/transformation.png) 6. 
Spark UI> The default port is 4040, so open http://localhost:4040 and inspect the Narrow and Wide Transformation DAGs covered earlier ###Code # Narrow Transformation strings = spark.read.text("../requirements.txt") jupyter = strings.filter(strings.value.contains("jupyter")) jupyter.show(truncate=False) # Wide Transformation user = spark.read.option("header", "true").option("inferSchema", "true").csv("data/tbl_user.csv") count = user.groupBy("u_gender").count() count.show(truncate=False) ###Output +--------+-----+ |u_gender|count| +--------+-----+ |여 |3 | |남 |6 | +--------+-----+ ###Markdown | Narrow | Wide ||---|---||![narrow](images/narrow.png)|![wide](images/wide.png)| 7. M&M Chocolate Classification Example (for reference)> Using the databricks dataset examples provided with [Learning Spark 2nd Edition](https://github.com/psyoblade/LearningSparkV2?organization=psyoblade&organization=psyoblade), we write an example that aggregates M&M chocolate sales per U.S. state ###Code mnm_df = spark.read.option("header", "true").option("inferSchema", "true").csv("data/databricks/mnm_dataset.csv") mnm_df.printSchema() mnm_df.show(truncate=False) from pyspark.sql.functions import * # We use the DataFrame high-level APIs. Note # that we don't use RDDs at all. Because some of Spark's # functions return the same object, we can chain function calls. # 1. Select from the DataFrame the fields "State", "Color", and "Count" # 2. Since we want to group each state and its M&M color count, # we use groupBy() # 3. Aggregate counts of all colors and groupBy() State and Color # 4 orderBy() in descending order count_mnm_df = (mnm_df.select("State", "Color", "Count") \ .groupBy("State", "Color") \ .agg(count("Count").alias("Total")) \ .orderBy("Total", ascending=False)) # Show the resulting aggregations for all the states and colors; # a total count of each color per state. # Note show() is an action, which will trigger the above # query to be executed. 
count_mnm_df.show(n=60, truncate=False) print("Total Rows = %d" % (count_mnm_df.count())) # While the above code aggregated and counted for all # the states, what if we just want to see the data for # a single state, e.g., CA? # 1. Select from all rows in the DataFrame # 2. Filter only CA state # 3. groupBy() State and Color as we did above # 4. Aggregate the counts for each color # 5. orderBy() in descending order # Find the aggregate count for California by filtering ca_count_mnm_df = (mnm_df.select("State", "Color", "Count") \ .where(mnm_df.State == "CA") \ .groupBy("State", "Color") \ .agg(count("Count").alias("Total")) \ .orderBy("Total", ascending=False)) # Show the resulting aggregation for California. # As above, show() is an action that will trigger the execution of the # entire computation. ca_count_mnm_df.show(n=10, truncate=False) # Stop the SparkSession # spark.stop() ###Output +-----+------+-----+ |State|Color |Total| +-----+------+-----+ |CA |Yellow|1807 | |WA |Green |1779 | |OR |Orange|1743 | |TX |Green |1737 | |TX |Red |1725 | |CA |Green |1723 | |CO |Yellow|1721 | |CA |Brown |1718 | |CO |Green |1713 | |NV |Orange|1712 | |TX |Yellow|1703 | |NV |Green |1698 | |AZ |Brown |1698 | |CO |Blue |1695 | |WY |Green |1695 | |NM |Red |1690 | |AZ |Orange|1689 | |NM |Yellow|1688 | |NM |Brown |1687 | |UT |Orange|1684 | |NM |Green |1682 | |UT |Red |1680 | |AZ |Green |1676 | |NV |Yellow|1675 | |NV |Blue |1673 | |WA |Red |1671 | |WY |Red |1670 | |WA |Brown |1669 | |NM |Orange|1665 | |WY |Blue |1664 | |WA |Yellow|1663 | |WA |Orange|1658 | |CA |Orange|1657 | |NV |Brown |1657 | |CO |Brown |1656 | |CA |Red |1656 | |UT |Blue |1655 | |AZ |Yellow|1654 | |TX |Orange|1652 | |AZ |Red |1648 | |OR |Blue |1646 | |UT |Yellow|1645 | |OR |Red |1645 | |CO |Orange|1642 | |TX |Brown |1641 | |NM |Blue |1638 | |AZ |Blue |1636 | |OR |Green |1634 | |UT |Brown |1631 | |WY |Yellow|1626 | |WA |Blue |1625 | |CO |Red |1624 | |OR |Brown |1621 | |TX |Blue |1614 | |OR |Yellow|1614 | |NV |Red |1610 
| |CA |Blue |1603 | |WY |Orange|1595 | |UT |Green |1591 | |WY |Brown |1532 | +-----+------+-----+ Total Rows = 60 +-----+------+-----+ |State|Color |Total| +-----+------+-----+ |CA |Yellow|1807 | |CA |Green |1723 | |CA |Brown |1718 | |CA |Orange|1657 | |CA |Red |1656 | |CA |Blue |1603 | +-----+------+-----+ ###Markdown 5. [Basic] Read the CSV data at "data/tbl_user.csv" and 1. Print the schema 2. Create a temporary table named `user` 3. Print 10 rows using a spark sql statement[Exercise 5] Check your output > Your answer is correct if it was written in a similar way to the code below```pythondf5 = ( spark .read .option("header", "true") .option("inferSchema", "true") .csv("data/tbl_user.csv")) df5.printSchema()answer = df5.createOrReplaceTempView("user")spark.sql("select * from user").show(10)``` ###Code # Write your practice code here and run it (Shift+Enter) df5 = ( spark .read .option("header", "true") .option("inferSchema", "true") .csv("data/tbl_user.csv") ) df5.printSchema() answer = df5.createOrReplaceTempView("user") spark.sql("select * from user").show(10) ###Output root |-- u_id: integer (nullable = true) |-- u_name: string (nullable = true) |-- u_gender: string (nullable = true) |-- u_signup: integer (nullable = true) +----+----------+--------+--------+ |u_id| u_name|u_gender|u_signup| +----+----------+--------+--------+ | 1| 정휘센| 남|19700808| | 2| 김싸이언| 남|19710201| | 3| 박트롬| 여|19951030| | 4| 청소기| 남|19770329| | 5|유코드제로| 여|20021029| | 6| 윤디오스| 남|20040101| | 7| 임모바일| 남|20040807| | 8| 조노트북| 여|20161201| | 9| 최컴퓨터| 남|20201124| +----+----------+--------+--------+ ###Markdown 6. [Basic] Read the CSV data at "data/tbl_purchase.csv" and 1. Print the schema 2. Create a temporary table named `purchase` 3. 
Using a selectExpr clause or a spark sql statement, print the data with the `p_time` field converted via a date function into a human-readable form[Exercise 6] Check your output > Your answer is correct if it was written in a similar way to the code below```pythondf6 = ( spark .read .option("header", "true") .option("inferSchema", "true") .csv("data/tbl_purchase.csv")) df6.printSchema()answer = df6.createOrReplaceTempView("purchase")spark.sql("select from_unixtime(p_time) as p_time from purchase").show(10)``` ###Code # Write your practice code here and run it (Shift+Enter) df6 = ( spark .read .option("header", "true") .option("inferSchema", "true") .csv("data/tbl_purchase.csv") ) df6.printSchema() answer = df6.createOrReplaceTempView("purchase") spark.sql("select from_unixtime(p_time) as p_time from purchase").show(10) ###Output root |-- p_time: integer (nullable = true) |-- p_uid: integer (nullable = true) |-- p_id: integer (nullable = true) |-- p_name: string (nullable = true) |-- p_amount: integer (nullable = true) +-------------------+ | p_time| +-------------------+ |2020-10-26 03:45:50| |2020-10-26 03:45:50| |2020-10-26 15:45:55| |2020-10-26 09:51:40| |2020-10-26 03:55:55| |2020-10-26 10:08:20| |2020-10-26 07:45:55| |2020-10-26 07:49:15| +-------------------+
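The `from_unixtime` conversions used throughout this notebook can be cross-checked outside Spark with plain Python, since epoch seconds carry no time zone and the rendered string depends on the zone applied. A small sketch, using the first `a_time` value from `data/log_access.csv` and assuming the session's configured Asia/Seoul (UTC+9) time zone:

```python
from datetime import datetime, timezone, timedelta

a_time = 1603645200  # first a_time value from data/log_access.csv

# Epoch seconds are time-zone-free; rendering depends on the zone you apply
utc = datetime.fromtimestamp(a_time, tz=timezone.utc)
seoul = datetime.fromtimestamp(a_time, tz=timezone(timedelta(hours=9)))

print(utc.strftime("%Y-%m-%d %H:%M:%S"))    # 2020-10-25 17:00:00
print(seoul.strftime("%Y-%m-%d %H:%M:%S"))  # 2020-10-26 02:00:00
```

This is why `spark.sql.session.timeZone` was set at session creation: without it, `from_unixtime` renders in the JVM's default zone and results can differ between environments.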
ConditionalProbabilityExercise.ipynb
###Markdown Conditional Probability Activity & Exercise Below is some code to create some fake data on how much stuff people purchase given their age range.It generates 100,000 random "people" and randomly assigns them as being in their 20's, 30's, 40's, 50's, 60's, or 70's.It then assigns each person a purchase probability drawn at random, independent of age.In the end, we have two Python dictionaries:"totals" contains the total number of people in each age group."purchases" contains the total number of things purchased by people in each age group.The grand total of purchases is in totalPurchases, and we know the total number of people is 100,000.Let's run it and have a look: ###Code from numpy import random random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = random.random() totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 totals purchases totalPurchases ###Output _____no_output_____ ###Markdown Let's play with conditional probability.First let's compute P(E|F), where E is "purchase" and F is "you're in your 30's". The probability of someone in their 30's buying something is just the percentage of how many 30-year-olds bought something: ###Code PEF = float(purchases[30]) / float(totals[30]) print "P(purchase | 30s): ", PEF ###Output P(purchase | 30s): 0.506007449237 ###Markdown P(F) is just the probability of being 30 in this data set: ###Code PF = float(totals[30]) / 100000.0 print "P(30's): ", PF ###Output P(30's): 0.16646 ###Markdown And P(E) is the overall probability of buying something, regardless of your age: ###Code PE = float(totalPurchases) / 100000.0 print "P(Purchase):", PE ###Output P(Purchase): 0.50005 ###Markdown If E and F were independent, then we would expect P(E | F) to be about the same as P(E). 
And indeed they are about the same: P(E) is about 0.500 and P(E|F) is about 0.506. That tells us that E and F are (approximately) independent in this version, since the purchase probability was drawn at random with no dependence on age.What is P(E)P(F)? ###Code print "P(30's)P(Purchase)", PE * PF ###Output P(30's)P(Purchase) 0.083238323 ###Markdown P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's: ###Code print "P(30's, Purchase)", float(purchases[30]) / 100000.0 ###Output P(30's, Purchase) 0.08423 ###Markdown P(E,F) = P(E)P(F), and they are pretty close in this example. Because E and F are independent here, that is exactly what we expect; the small remaining gap is just the randomness of the data we're working with.We can also check that P(E|F) = P(E,F)/P(F) and sure enough, it is: ###Code (float(purchases[30]) / 100000.0) / PF ###Output _____no_output_____ ###Markdown Conditional Probability Activity & Exercise Below is some code to create some fake data on how much stuff people purchase given their age range.It generates 100,000 random "people" and randomly assigns them as being in their 20's, 30's, 40's, 50's, 60's, or 70's.It then assigns a lower probability for young people to buy stuff.In the end, we have two Python dictionaries:"totals" contains the total number of people in each age group."purchases" contains the total number of things purchased by people in each age group.The grand total of purchases is in totalPurchases, and we know the total number of people is 100,000.Let's run it and have a look: ###Code from numpy import random random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = float(ageDecade) / 100.0 totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 totals purchases totalPurchases 
###Output _____no_output_____ ###Markdown Let's play with conditional probability.First let's compute P(E|F), where E is "purchase" and F is "you're in your 30's". The probability of someone in their 30's buying something is just the percentage of how many 30-year-olds bought something: ###Code PEF = float(purchases[30]) / float(totals[30]) print('P(purchase | 30s): ' + str(PEF)) ###Output P(purchase | 30s): 0.29929598652145134 ###Markdown P(F) is just the probability of being 30 in this data set: ###Code PF = float(totals[30]) / 100000.0 print("P(30's): " + str(PF)) ###Output P(30's): 0.16619 ###Markdown And P(E) is the overall probability of buying something, regardless of your age: ###Code PE = float(totalPurchases) / 100000.0 print("P(Purchase):" + str(PE)) ###Output P(Purchase):0.45012 ###Markdown If E and F were independent, then we would expect P(E | F) to be about the same as P(E). But they're not; P(E) is 0.45, and P(E|F) is 0.3. So, that tells us that E and F are dependent (which we know they are in this example.) P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's: ###Code print("P(30's, Purchase)" + str(float(purchases[30]) / 100000.0)) ###Output P(30's, Purchase)0.04974 ###Markdown Let's also compute the product of P(E) and P(F), P(E)P(F): ###Code print("P(30's)P(Purchase)" + str(PE * PF)) ###Output P(30's)P(Purchase)0.07480544280000001 ###Markdown Something you may learn in stats is that P(E,F) = P(E)P(F), but this assumes E and F are independent. We've found here that P(E,F) is about 0.05, while P(E)P(F) is about 0.075. 
So when E and F are dependent - and we have a conditional probability going on - we can't just say that P(E,F) = P(E)P(F).We can also check that P(E|F) = P(E,F)/P(F), which is the relationship we showed in the slides - and sure enough, it is: ###Code print((purchases[30] / 100000.0) / PF) ###Output 0.29929598652145134 ###Markdown Conditional Probability Activity & Exercise Below is some code to create some fake data on how much stuff people purchase given their age range.It generates 100,000 random "people" and randomly assigns them as being in their 20's, 30's, 40's, 50's, 60's, or 70's.It then assigns everyone the same flat purchase probability (0.5), independent of age.In the end, we have two Python dictionaries:"totals" contains the total number of people in each age group."purchases" contains the total number of things purchased by people in each age group.The grand total of purchases is in totalPurchases, and we know the total number of people is 100,000.Let's run it and have a look: ###Code from numpy import random import numpy as np random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = 0.5 totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 totals purchases totalPurchases ###Output _____no_output_____ ###Markdown Let's play with conditional probability.First let's compute P(E|F), where E is "purchase" and F is "you're in your 30's". 
The probability of someone in their 30's buying something is just the percentage of how many 30-year-olds bought something: ###Code PEF = float(purchases[30]) / float(totals[30]) print('P(purchase | 30s): ' + str(PEF)) ###Output P(purchase | 30s): 0.49900716047896987 ###Markdown P(F) is just the probability of being 30 in this data set: ###Code PF = float(totals[30]) / 100000.0 print("P(30's): " + str(PF)) ###Output P(30's): 0.16619 ###Markdown And P(E) is the overall probability of buying something, regardless of your age: ###Code PE = float(totalPurchases) / 100000.0 print("P(Purchase):" + str(PE)) ###Output P(Purchase):0.50086 ###Markdown If E and F were independent, then we would expect P(E | F) to be about the same as P(E). And indeed they are about the same: P(E) is about 0.501 and P(E|F) is about 0.499, which tells us that E and F are independent in this version, since everyone buys with the same flat probability of 0.5.What is P(E)P(F)? ###Code print("P(30's)P(Purchase)" + str(PE * PF)) ###Output P(30's)P(Purchase)0.0832379234 ###Markdown P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's: ###Code print("P(30's, Purchase)" + str(float(purchases[30]) / 100000.0)) ###Output P(30's, Purchase)0.08293 ###Markdown P(E,F) = P(E)P(F), and they are pretty close in this example. 
Because E and F are independent here, that is exactly what we expect; any small difference comes from the randomness of the data we're working with.We can also check that P(E|F) = P(E,F)/P(F) and sure enough, it is: ###Code print((purchases[30] / 100000.0) / PF) ###Output 0.49900716047896987 ###Markdown Conditional Probability Activity & Exercise Below is some code to create some fake data on how much stuff people purchase given their age range.It generates 100,000 random "people" and randomly assigns them as being in their 20's, 30's, 40's, 50's, 60's, or 70's.It then assigns a lower probability for young people to buy stuff.In the end, we have two Python dictionaries:"totals" contains the total number of people in each age group."purchases" contains the total number of things purchased by people in each age group.The grand total of purchases is in totalPurchases, and we know the total number of people is 100,000.Let's run it and have a look: ###Code from numpy import random random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = float(ageDecade) / 100.0 totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 totals purchases totalPurchases ###Output _____no_output_____ ###Markdown Let's play with conditional probability.First let's compute P(E|F), where E is "purchase" and F is "you're in your 30's". 
The probability of someone in their 30's buying something is just the percentage of how many 30-year-olds bought something: ###Code PEF = float(purchases[30]) / float(totals[30]) print('P(purchase | 30s): ' + str(PEF)) ###Output P(purchase | 30s): 0.29929598652145134 ###Markdown P(F) is just the probability of being 30 in this data set: ###Code PF = float(totals[30]) / 100000.0 print("P(30's): " + str(PF)) ###Output P(30's): 0.16619 ###Markdown And P(E) is the overall probability of buying something, regardless of your age: ###Code PE = float(totalPurchases) / 100000.0 print("P(Purchase):" + str(PE)) ###Output P(Purchase):0.45012 ###Markdown If E and F were independent, then we would expect P(E | F) to be about the same as P(E). But they're not; P(E) is 0.45, and P(E|F) is 0.3. So, that tells us that E and F are dependent (which we know they are in this example.) P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's: ###Code print("P(30's, Purchase)" + str(float(purchases[30]) / 100000.0)) ###Output P(30's, Purchase)0.04974 ###Markdown Let's also compute the product of P(E) and P(F), P(E)P(F): ###Code print("P(30's)P(Purchase)" + str(PE * PF)) ###Output P(30's)P(Purchase)0.07480544280000001 ###Markdown Something you may learn in stats is that P(E,F) = P(E)P(F), but this assumes E and F are independent. We've found here that P(E,F) is about 0.05, while P(E)P(F) is about 0.075. 
So when E and F are dependent - and we have a conditional probability going on - we can't just say that P(E,F) = P(E)P(F). We can also check that P(E|F) = P(E,F)/P(F), which is the relationship we showed in the slides - and sure enough, it is: ###Code print((purchases[30] / 100000.0) / PF) ###Output 0.29929598652145134 ###Markdown Conditional Probability Activity & Exercise Below is some code to create some fake data on how much stuff people purchase given their age range. It generates 100,000 random "people" and randomly assigns them as being in their 20's, 30's, 40's, 50's, 60's, or 70's. In this version, every age group is assigned the same 50% probability of buying something, so purchasing should not depend on age. In the end, we have two Python dictionaries: "totals" contains the total number of people in each age group. "purchases" contains the total number of things purchased by people in each age group. The grand total of purchases is in totalPurchases, and we know the total number of people is 100,000. Let's run it and have a look: ###Code from numpy import random random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = .5 totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 totals purchases totalPurchases ###Output _____no_output_____ ###Markdown Let's play with conditional probability. First let's compute P(E|F), where E is "purchase" and F is "you're in your 30's". 
The probability of someone in their 30's buying something is just the percentage of how many 30-year-olds bought something: ###Code PEF = float(purchases[30]) / float(totals[30]) print('P(purchase | 30s): ' + str(PEF)) ###Output P(purchase | 30s): 0.29929598652145134 ###Markdown P(F) is just the probability of being 30 in this data set: ###Code PF = float(totals[30]) / 100000.0 print("P(30's): " + str(PF)) ###Output P(30's): 0.16619 ###Markdown And P(E) is the overall probability of buying something, regardless of your age: ###Code PE = float(totalPurchases) / 100000.0 print("P(Purchase):" + str(PE)) ###Output P(Purchase):0.45012 ###Markdown If E and F were independent, then we would expect P(E | F) to be about the same as P(E). But they're not; P(E) is 0.45, and P(E|F) is 0.3. So, that tells us that E and F are dependent (which we know they are in this example.) What is P(E)P(F)? ###Code print("P(30's)P(Purchase)" + str(PE * PF)) ###Output P(30's)P(Purchase)0.07480544280000001 ###Markdown P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's: ###Code print("P(30's, Purchase)" + str(float(purchases[30]) / 100000.0)) ###Output P(30's, Purchase)0.04974 ###Markdown Is P(E,F) equal to P(E)P(F)? Not here: P(E,F) is about 0.05, while P(E)P(F) is about 0.075. 
That is because E and F are actually dependent on each other; when a conditional probability is in play, we can't just multiply P(E) and P(F) together. We can also check that P(E|F) = P(E,F)/P(F) and sure enough, it is: ###Code print((purchases[30] / 100000.0) / PF) ###Output 0.29929598652145134 ###Markdown Conditional Probability Activity & Exercise Below is some code to create some fake data on how much stuff people purchase given their age range. It generates 100,000 random "people" and randomly assigns them as being in their 20's, 30's, 40's, 50's, 60's, or 70's. In this version, every age group is assigned the same 50% probability of buying something, so purchasing should not depend on age. In the end, we have two Python dictionaries: "totals" contains the total number of people in each age group. "purchases" contains the total number of things purchased by people in each age group. The grand total of purchases is in totalPurchases, and we know the total number of people is 100,000. Let's run it and have a look: ###Code from numpy import random random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = .5 totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 totals purchases totalPurchases ###Output _____no_output_____ ###Markdown Let's play with conditional probability. First let's compute P(E|F), where E is "purchase" and F is "you're in your 30's". 
The probability of someone in their 30's buying something is just the percentage of how many 30-year-olds bought something: ###Code PEF = float(purchases[30]) / float(totals[30]) print('P(purchase | 30s): ' + str(PEF)) ###Output P(purchase | 30s): 0.499007160479 ###Markdown P(F) is just the probability of being 30 in this data set: ###Code PF = float(totals[30]) / 100000.0 print("P(30's): " + str(PF)) ###Output P(30's): 0.16619 ###Markdown And P(E) is the overall probability of buying something, regardless of your age: ###Code PE = float(totalPurchases) / 100000.0 print("P(Purchase):" + str(PE)) ###Output P(Purchase):0.50086 ###Markdown If E and F were independent, then we would expect P(E | F) to be about the same as P(E). And indeed they are: P(E) is about 0.501 and P(E|F) is about 0.499. With every age group given the same 50% purchase probability, E and F are independent in this version of the example. What is P(E)P(F)? ###Code print("P(30's)P(Purchase)" + str(PE * PF)) ###Output P(30's)P(Purchase)0.0832379234 ###Markdown P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's: ###Code print("P(30's, Purchase)" + str(float(purchases[30]) / 100000.0)) ###Output P(30's, Purchase)0.08293 ###Markdown P(E,F) = P(E)P(F), and they are pretty close in this example. Up to the randomness of the data we're working with, the two values agree, as expected for independent events. We can also check that P(E|F) = P(E,F)/P(F) and sure enough, it is: ###Code print((purchases[30] / 100000.0) / PF) ###Output 0.499007160479
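As a quick aside (this helper is not part of the original notebooks), the P(E|F)-versus-P(E) comparison done above for the 30's group can be repeated for every age group at once; a minimal sketch, assuming the same `totals` and `purchases` dictionaries built by the data-generation cell:

```python
def purchase_probabilities(totals, purchases):
    """Return ({age: P(purchase | age)}, overall P(purchase)).

    `totals` and `purchases` are the age-keyed count dictionaries
    built by the data-generation cell above.
    """
    n = sum(totals.values())
    overall = sum(purchases.values()) / n
    by_age = {age: purchases[age] / totals[age] for age in totals}
    return by_age, overall

# If every P(purchase | age) sits close to the overall P(purchase),
# age and purchasing behave independently; large gaps indicate dependence.
```

Comparing each entry of the returned dictionary against the overall rate makes the two variants above easy to tell apart at a glance: the flat-50% data gives near-identical values per age group, while the age-scaled data does not.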
previous_final_project_examples/katepoole26/keyness_analysis.ipynb
###Markdown Keyness Analysis (R v. D) Setup ###Code import os from collections import Counter %matplotlib inline import os import re import csv import pandas as pd import numpy as np import matplotlib.pyplot as plt from nltk import tokenize from nltk.sentiment.vader import SentimentIntensityAnalyzer ## Parameters to_strip = ',.\xa0:-()\';$"/?][!`Ą@Ś§¨’–“”…ï‘>&\\%˝˘*' ## Open speeches all_speeches_r = open('data/republican_all.txt').read() all_speeches_d = open('data/democrat_all.txt').read() ###Output _____no_output_____ ###Markdown Functions ###Code %run functions.ipynb ###Output _____no_output_____ ###Markdown Get n-Gram Distributions ###Code %run n-gram_frequency_analysis.ipynb ###Output 98055 tokens in the Republican convention nominee speeches 91071 tokens in the Democratic convention nominee speeches ###Markdown Keyness Analysis ###Code calculate_keyness(word_freq_d, word_freq_r) calculate_keyness(bigram_freq_d, bigram_freq_r) calculate_keyness(trigram_freq_d, trigram_freq_r) ###Output WORD Corpus A Freq.Corpus B Freq.Keyness ============================================================ the democratic party 47 8 33.590 we can do 24 7 11.160 the people of 40 18 10.268 i want to 46 23 9.615 we're going to 41 20 9.020 of this country 21 8 7.044
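The `calculate_keyness` helper used above is defined in `functions.ipynb`, which is not shown here. For readers without that file, the usual statistic for this kind of corpus comparison is Dunning's log-likelihood (G²); a minimal stand-alone sketch, which the project's actual helper may differ from in details such as smoothing or sorting:

```python
import math
from collections import Counter

def log_likelihood_keyness(freq_a, freq_b):
    """Dunning G2 keyness for every term of corpus A compared with corpus B.

    freq_a, freq_b: Counter-like mappings of term -> raw frequency.
    Returns (term, G2) pairs sorted from most to least distinctive.
    """
    size_a, size_b = sum(freq_a.values()), sum(freq_b.values())
    scores = {}
    for term, a in freq_a.items():
        b = freq_b.get(term, 0)
        # Expected frequencies under the null hypothesis that the term
        # occurs at the same rate in both corpora.
        expected_a = size_a * (a + b) / (size_a + size_b)
        expected_b = size_b * (a + b) / (size_a + size_b)
        g2 = 2 * (a * math.log(a / expected_a)
                  + (b * math.log(b / expected_b) if b else 0))
        scores[term] = g2
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Applied to frequency dictionaries like `word_freq_d` and `word_freq_r`, this ranks words (or n-grams) by how strongly their frequencies diverge between the two corpora.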
notebooks/04-automate-optional.ipynb
###Markdown Run workflow in an automatic wayIn the previous notebook [03-process](03-process.ipynb), we ran through the workflow in detailed steps. For daily running routines, the current notebook provides a more succinct and automatic approach to run through the pipeline using some utility functions in the workflow. ###Code import os os.chdir('..') import numpy as np from workflow_calcium_imaging.pipeline import lab, subject, session, scan, imaging ###Output Connecting [email protected]:3306 ###Markdown Ingestion of subjects, sessions, scans+ Fill subject and session information in files `/user_data/subjects.csv` and `/user_data/sessions.csv`+ Run automatic scripts prepared in `workflow_calcium_imaging.ingest` for ingestion: + `ingest_subjects` - ingests data into subject.Subject + `ingest_sessions` - ingests data into Equipment, session.Session, session.SessionDirectory, scan.Scan ###Code from workflow_calcium_imaging.ingest import ingest_subjects, ingest_sessions ingest_subjects() ingest_sessions() ###Output ---- Insert 1 entry(s) into subject.Subject ---- ---- Successfully completed ingest_subjects ---- {'scanning_mode': 'bidirectional', 'frame_rate': 7.8125, 'num_frames': 7530, 'num_channels': 1, 'num_planes': 4, 'frame_size': array([512, 796], dtype=uint16), 'num_target_frames': 0, 'num_stored_frames': 30123, 'stage_pos': [0, 0, -311.71], 'stage_angle': 9.65, 'etl_pos': [203, 255, 314, 379], 'filename': 'run00_orientation_8dir_000_000.sbx', 'resonant_freq': 8000, 'scanbox_version': 3, 'records_per_buffer': 256, 'magnification': 1.7, 'um_per_pixel_x': nan, 'um_per_pixel_y': nan, 'objective': 'Nikon_16x_dlr', 'messages': array([], dtype=object), 'event_id': array([], dtype=uint8), 'usernotes': array([], dtype='<U1'), 'ballmotion': array([], dtype='<U1')} ---- Insert 1 entry(s) into experiment.Equipment ---- ---- Insert 1 entry(s) into session.Session ---- ---- Insert 1 entry(s) into scan.Scan ---- ---- Successfully completed ingest_sessions ---- ###Markdown 
(Optional) Insert new ProcessingParamSet for Suite2p or CaImAn+ This is not needed if you are using an existing ProcessingParamSet. ###Code params_suite2p = {'look_one_level_down': 0.0, 'fast_disk': [], 'delete_bin': False, 'mesoscan': False, 'h5py': [], 'h5py_key': 'data', 'save_path0': [], 'subfolders': [], 'nplanes': 1, 'nchannels': 1, 'functional_chan': 1, 'tau': 1.0, 'fs': 10.0, 'force_sktiff': False, 'preclassify': 0.0, 'save_mat': False, 'combined': True, 'aspect': 1.0, 'do_bidiphase': False, 'bidiphase': 0.0, 'do_registration': True, 'keep_movie_raw': False, 'nimg_init': 300, 'batch_size': 500, 'maxregshift': 0.1, 'align_by_chan': 1, 'reg_tif': False, 'reg_tif_chan2': False, 'subpixel': 10, 'smooth_sigma': 1.15, 'th_badframes': 1.0, 'pad_fft': False, 'nonrigid': True, 'block_size': [128, 128], 'snr_thresh': 1.2, 'maxregshiftNR': 5.0, '1Preg': False, 'spatial_hp': 50.0, 'pre_smooth': 2.0, 'spatial_taper': 50.0, 'roidetect': True, 'sparse_mode': False, 'diameter': 12, 'spatial_scale': 0, 'connected': True, 'nbinned': 5000, 'max_iterations': 20, 'threshold_scaling': 1.0, 'max_overlap': 0.75, 'high_pass': 100.0, 'inner_neuropil_radius': 2, 'min_neuropil_pixels': 350, 'allow_overlap': False, 'chan2_thres': 0.65, 'baseline': 'maximin', 'win_baseline': 60.0, 'sig_baseline': 10.0, 'prctile_baseline': 8.0, 'neucoeff': 0.7, 'xrange': np.array([0, 0]), 'yrange': np.array([0, 0])} imaging.ProcessingParamSet.insert_new_params( processing_method='suite2p', paramset_idx=0, params=params_suite2p, paramset_desc='Calcium imaging analysis with Suite2p using default Suite2p parameters') ###Output _____no_output_____ ###Markdown Trigger autoprocessing of the remaining calcium imaging workflow ###Code from workflow_calcium_imaging import process ###Output _____no_output_____ ###Markdown + The `process.run()` function in the workflow populates every auto-processing table in the workflow. 
If a table is dependent on a manual table upstream, it will not get populated until the manual table is inserted.+ At this stage, process script populates through the table upstream of `ProcessingTask` (i.e. scan.ScanInfo) ###Code process.run() ###Output ScanInfo: 100%|██████████| 1/1 [00:00<00:00, 46.27it/s] Processing: 0it [00:00, ?it/s] MotionCorrection: 0it [00:00, ?it/s] Segmentation: 0it [00:00, ?it/s] MaskClassification: 0it [00:00, ?it/s] Fluorescence: 0it [00:00, ?it/s] Activity: 0it [00:00, ?it/s] ---- Populate imported and computed tables ---- {'scanning_mode': 'bidirectional', 'frame_rate': 7.8125, 'num_frames': 7530, 'num_channels': 1, 'num_planes': 4, 'frame_size': array([512, 796], dtype=uint16), 'num_target_frames': 0, 'num_stored_frames': 30123, 'stage_pos': [0, 0, -311.71], 'stage_angle': 9.65, 'etl_pos': [203, 255, 314, 379], 'filename': 'run00_orientation_8dir_000_000.sbx', 'resonant_freq': 8000, 'scanbox_version': 3, 'records_per_buffer': 256, 'magnification': 1.7, 'um_per_pixel_x': nan, 'um_per_pixel_y': nan, 'objective': 'Nikon_16x_dlr', 'messages': array([], dtype=object), 'event_id': array([], dtype=uint8), 'usernotes': array([], dtype='<U1'), 'ballmotion': array([], dtype='<U1')} ---- Successfully completed workflow_calcium_imaging/process.py ---- ###Markdown Insert new ProcessingTask to trigger ingestion of processing resultsTo populate the rest of the tables in the workflow, an entry in the `ProcessingTask` needs to be added to trigger the ingestion of the processing results, with the two pieces of information specified:+ `paramset_idx` used for the processing job+ output directory storing the processing results ###Code session_key = session.Session.fetch1('KEY') imaging.ProcessingTask.insert1(dict(session_key, scan_id=0, paramset_idx=0, processing_output_dir='subject3/210107_run00_orientation_8dir/suite2p'), skip_duplicates=True) ###Output _____no_output_____ ###Markdown Run populate for table `imaging.Processing` ###Code process.run() 
###Output ScanInfo: 0it [00:00, ?it/s] Processing: 100%|██████████| 1/1 [00:00<00:00, 90.57it/s] MotionCorrection: 0it [00:00, ?it/s] Segmentation: 0it [00:00, ?it/s] MaskClassification: 0it [00:00, ?it/s] Fluorescence: 0it [00:00, ?it/s] Activity: 0it [00:00, ?it/s] ---- Populate imported and computed tables ---- ---- Successfully completed workflow_calcium_imaging/process.py ---- ###Markdown Insert new Curation to trigger ingestion of curated results ###Code key = (imaging.ProcessingTask & session_key).fetch1('KEY') imaging.Curation().create1_from_processing_task(key) ###Output _____no_output_____ ###Markdown Run populate for the rest of the tables in the workflow (takes a while) ###Code process.run() ###Output ScanInfo: 0it [00:00, ?it/s] Processing: 0it [00:00, ?it/s] MotionCorrection: 0%| | 0/1 [00:00<?, ?it/s] ---- Populate imported and computed tables ---- MotionCorrection: 100%|██████████| 1/1 [00:06<00:00, 6.65s/it] Segmentation: 100%|██████████| 1/1 [00:03<00:00, 3.25s/it] MaskClassification: 100%|██████████| 1/1 [00:00<00:00, 820.96it/s] Fluorescence: 100%|██████████| 1/1 [00:50<00:00, 50.84s/it] Activity: 100%|██████████| 1/1 [00:08<00:00, 8.01s/it] ---- Successfully completed workflow_calcium_imaging/process.py ---- ###Markdown Run workflow in an automatic wayIn the previous notebook [03-process](03-process.ipynb), we ran through the workflow in detailed steps. For daily running routines, the current notebook provides a more succinct and automatic approach to run through the pipeline using some utility functions in the workflow. 
###Code import os os.chdir('..') import numpy as np from workflow_calcium_imaging.pipeline import lab, subject, session, scan, imaging ###Output Connecting [email protected]:3306 ###Markdown Ingestion of subjects, sessions, scans+ Fill subject and session information in files `/user_data/subjects.csv` and `/user_data/sessions.csv`+ Run automatic scripts prepared in `workflow_calcium_imaging.ingest` for ingestion: + `ingest_subjects` - ingests data into subject.Subject + `ingest_sessions` - ingests data into Equipment, session.Session, session.SessionDirectory, scan.Scan ###Code from workflow_calcium_imaging.ingest import ingest_subjects, ingest_sessions ingest_subjects() ingest_sessions() ###Output ---- Insert 1 entry(s) into subject.Subject ---- ---- Successfully completed ingest_subjects ---- {'scanning_mode': 'bidirectional', 'frame_rate': 7.8125, 'num_frames': 7530, 'num_channels': 1, 'num_planes': 4, 'frame_size': array([512, 796], dtype=uint16), 'num_target_frames': 0, 'num_stored_frames': 30123, 'stage_pos': [0, 0, -311.71], 'stage_angle': 9.65, 'etl_pos': [203, 255, 314, 379], 'filename': 'run00_orientation_8dir_000_000.sbx', 'resonant_freq': 8000, 'scanbox_version': 3, 'records_per_buffer': 256, 'magnification': 1.7, 'um_per_pixel_x': nan, 'um_per_pixel_y': nan, 'objective': 'Nikon_16x_dlr', 'messages': array([], dtype=object), 'event_id': array([], dtype=uint8), 'usernotes': array([], dtype='<U1'), 'ballmotion': array([], dtype='<U1')} ---- Insert 1 entry(s) into experiment.Equipment ---- ---- Insert 1 entry(s) into session.Session ---- ---- Insert 1 entry(s) into scan.Scan ---- ---- Successfully completed ingest_sessions ---- ###Markdown (Optional) Insert new ProcessingParamSet for Suite2p or CaImAn+ This is not needed if you are using an existing ProcessingParamSet. 
###Code params_suite2p = {'look_one_level_down': 0.0, 'fast_disk': [], 'delete_bin': False, 'mesoscan': False, 'h5py': [], 'h5py_key': 'data', 'save_path0': [], 'subfolders': [], 'nplanes': 1, 'nchannels': 1, 'functional_chan': 1, 'tau': 1.0, 'fs': 10.0, 'force_sktiff': False, 'preclassify': 0.0, 'save_mat': False, 'combined': True, 'aspect': 1.0, 'do_bidiphase': False, 'bidiphase': 0.0, 'do_registration': True, 'keep_movie_raw': False, 'nimg_init': 300, 'batch_size': 500, 'maxregshift': 0.1, 'align_by_chan': 1, 'reg_tif': False, 'reg_tif_chan2': False, 'subpixel': 10, 'smooth_sigma': 1.15, 'th_badframes': 1.0, 'pad_fft': False, 'nonrigid': True, 'block_size': [128, 128], 'snr_thresh': 1.2, 'maxregshiftNR': 5.0, '1Preg': False, 'spatial_hp': 50.0, 'pre_smooth': 2.0, 'spatial_taper': 50.0, 'roidetect': True, 'sparse_mode': False, 'diameter': 12, 'spatial_scale': 0, 'connected': True, 'nbinned': 5000, 'max_iterations': 20, 'threshold_scaling': 1.0, 'max_overlap': 0.75, 'high_pass': 100.0, 'inner_neuropil_radius': 2, 'min_neuropil_pixels': 350, 'allow_overlap': False, 'chan2_thres': 0.65, 'baseline': 'maximin', 'win_baseline': 60.0, 'sig_baseline': 10.0, 'prctile_baseline': 8.0, 'neucoeff': 0.7, 'xrange': np.array([0, 0]), 'yrange': np.array([0, 0])} imaging.ProcessingParamSet.insert_new_params( processing_method='suite2p', paramset_idx=0, params=params_suite2p, paramset_desc='Calcium imaging analysis with Suite2p using default Suite2p parameters') ###Output _____no_output_____ ###Markdown Trigger autoprocessing of the remaining calcium imaging workflow ###Code from workflow_calcium_imaging import process ###Output _____no_output_____ ###Markdown + The `process.run()` function in the workflow populates every auto-processing table in the workflow. If a table is dependent on a manual table upstream, it will not get populated until the manual table is inserted.+ At this stage, process script populates through the table upstream of `ProcessingTask` (i.e. 
scan.ScanInfo) ###Code process.run() ###Output ScanInfo: 100%|██████████| 1/1 [00:00<00:00, 46.27it/s] Processing: 0it [00:00, ?it/s] MotionCorrection: 0it [00:00, ?it/s] Segmentation: 0it [00:00, ?it/s] MaskClassification: 0it [00:00, ?it/s] Fluorescence: 0it [00:00, ?it/s] Activity: 0it [00:00, ?it/s] ---- Populate imported and computed tables ---- {'scanning_mode': 'bidirectional', 'frame_rate': 7.8125, 'num_frames': 7530, 'num_channels': 1, 'num_planes': 4, 'frame_size': array([512, 796], dtype=uint16), 'num_target_frames': 0, 'num_stored_frames': 30123, 'stage_pos': [0, 0, -311.71], 'stage_angle': 9.65, 'etl_pos': [203, 255, 314, 379], 'filename': 'run00_orientation_8dir_000_000.sbx', 'resonant_freq': 8000, 'scanbox_version': 3, 'records_per_buffer': 256, 'magnification': 1.7, 'um_per_pixel_x': nan, 'um_per_pixel_y': nan, 'objective': 'Nikon_16x_dlr', 'messages': array([], dtype=object), 'event_id': array([], dtype=uint8), 'usernotes': array([], dtype='<U1'), 'ballmotion': array([], dtype='<U1')} ---- Successfully completed workflow_calcium_imaging/process.py ---- ###Markdown Insert new ProcessingTask to trigger ingestion of processing resultsTo populate the rest of the tables in the workflow, an entry in the `ProcessingTask` needs to be added to trigger the ingestion of the processing results, with the two pieces of information specified:+ `paramset_idx` used for the processing job+ output directory storing the processing results ###Code session_key = session.Session.fetch1('KEY') imaging.ProcessingTask.insert1(dict(session_key, scan_id=0, paramset_idx=0, processing_output_dir='subject3/210107_run00_orientation_8dir/suite2p'), skip_duplicates=True) ###Output _____no_output_____ ###Markdown Run populate for table `imaging.Processing` ###Code process.run() ###Output ScanInfo: 0it [00:00, ?it/s] Processing: 100%|██████████| 1/1 [00:00<00:00, 90.57it/s] MotionCorrection: 0it [00:00, ?it/s] Segmentation: 0it [00:00, ?it/s] MaskClassification: 0it [00:00, ?it/s] 
Fluorescence: 0it [00:00, ?it/s] Activity: 0it [00:00, ?it/s] ---- Populate imported and computed tables ---- ---- Successfully completed workflow_calcium_imaging/process.py ---- ###Markdown Insert new Curation to trigger ingestion of curated results ###Code key = (imaging.ProcessingTask & session_key).fetch1('KEY') imaging.Curation().create1_from_processing_task(key) ###Output _____no_output_____ ###Markdown Run populate for the rest of the tables in the workflow (takes a while) ###Code process.run() ###Output ScanInfo: 0it [00:00, ?it/s] Processing: 0it [00:00, ?it/s] MotionCorrection: 0%| | 0/1 [00:00<?, ?it/s] ---- Populate imported and computed tables ---- MotionCorrection: 100%|██████████| 1/1 [00:06<00:00, 6.65s/it] Segmentation: 100%|██████████| 1/1 [00:03<00:00, 3.25s/it] MaskClassification: 100%|██████████| 1/1 [00:00<00:00, 820.96it/s] Fluorescence: 100%|██████████| 1/1 [00:50<00:00, 50.84s/it] Activity: 100%|██████████| 1/1 [00:08<00:00, 8.01s/it] ---- Successfully completed workflow_calcium_imaging/process.py ---- ###Markdown Run workflow in an automatic wayIn the previous notebook [03-process](03-process.ipynb), we ran through the workflow in detailed steps. For daily running routines, the current notebook provides a more succinct and automatic approach to run through the pipeline using some utility functions in the workflow. ###Code import os os.chdir('..') import numpy as np from workflow_array_ephys.pipeline import lab, subject, session, probe, ephys ###Output Connecting root@localhost:3306 ###Markdown Ingestion of subjects, sessions, probes, probe insertions1. Fill subject and session information in files `/user_data/subjects.csv` and `/user_data/sessions.csv`2. 
Run automatic scripts prepared in `workflow_array_ephys.ingest` for ingestion ###Code from workflow_array_ephys.ingest import ingest_subjects, ingest_sessions ###Output _____no_output_____ ###Markdown Insert new entries for subject.Subject from the `subjects.csv` file ###Code ingest_subjects() ###Output ---- Insert 1 entry(s) into subject.Subject ---- ###Markdown Insert new entries for session.Session, session.SessionDirectory, probe.Probe, ephys.ProbeInsertion from the `sessions.csv` file ###Code ingest_sessions() ###Output ---- Insert 0 entry(s) into session.Session ---- ---- Insert 0 entry(s) into probe.Probe ---- ---- Insert 0 entry(s) into ephys.ProbeInsertion ---- ---- Successfully completed workflow_array_ephys/ingest.py ---- ###Markdown [Optional] Insert new ClusteringParamSet for Kilosort This is not needed if you keep using the existing ClusteringParamSet ###Code params_ks = { "fs": 30000, "fshigh": 150, "minfr_goodchannels": 0.1, "Th": [10, 4], "lam": 10, "AUCsplit": 0.9, "minFR": 0.02, "momentum": [20, 400], "sigmaMask": 30, "ThPr": 8, "spkTh": -6, "reorder": 1, "nskip": 25, "GPU": 1, "Nfilt": 1024, "nfilt_factor": 4, "ntbuff": 64, "whiteningRange": 32, "nSkipCov": 25, "scaleproc": 200, "nPCs": 3, "useRAM": 0 } ephys.ClusteringParamSet.insert_new_params( processing_method='kilosort2', paramset_idx=0, params=params_ks, paramset_desc='Spike sorting using Kilosort2') ###Output _____no_output_____ ###Markdown Trigger autoprocessing of the remaining ephys pipeline ###Code from workflow_array_ephys import process ###Output _____no_output_____ ###Markdown The `process.run()` function populates every auto-processing table in the workflow. If a table is dependent on a manual table upstream, it will not get populated until the manual table is inserted. 
###Code # At this stage, process script populates through the table upstream of `ClusteringTask` process.run() ###Output ---- Populate ephys.EphysRecording ---- ###Markdown Insert new ClusteringTask to trigger ingestion of clustering resultsTo populate the rest of the tables in the workflow, an entry in the `ClusteringTask` needs to be added to trigger the ingestion of the clustering results, with the two pieces of information specified:+ the `paramset_idx` used for the clustering job+ the output directory storing the clustering results ###Code session_key = session.Session.fetch1('KEY') ephys.ClusteringTask.insert1( dict(session_key, insertion_number=0, paramset_idx=0, clustering_output_dir='subject6/session1/towersTask_g0_imec0'), skip_duplicates=True) # run populate again for table Clustering process.run() ###Output EphysRecording: 0it [00:00, ?it/s] LFP: 0it [00:00, ?it/s] Clustering: 0it [00:00, ?it/s] CuratedClustering: 0it [00:00, ?it/s] WaveformSet: 0it [00:00, ?it/s] ###Markdown Insert new Curation to trigger ingestion of curated results ###Code key = (ephys.ClusteringTask & session_key).fetch1('KEY') ephys.Curation().create1_from_clustering_task(key) # run populate for the rest of the tables in the workflow, takes a while process.run() ###Output EphysRecording: 0it [00:00, ?it/s] LFP: 0it [00:00, ?it/s] Clustering: 0it [00:00, ?it/s] CuratedClustering: 0%| | 0/1 [00:00<?, ?it/s]
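Both workflow notebooks above start from `/user_data/subjects.csv` and `/user_data/sessions.csv` without showing what those files contain. As a rough illustration only, a subjects file could be generated as below; note the column names here are assumptions, since the headers each workflow actually expects are dictated by its ingest code, so check that before using this for real:

```python
import csv

# Hypothetical column names; the workflow's ingest functions define
# the headers it really expects.
SUBJECT_COLUMNS = ['subject', 'sex', 'subject_birth_date', 'subject_description']

def write_subjects_csv(path, rows):
    """Write subject records (a list of dicts) to an ingestible CSV file."""
    with open(path, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=SUBJECT_COLUMNS)
        writer.writeheader()
        writer.writerows(rows)

write_subjects_csv('subjects.csv', [
    {'subject': 'subject6', 'sex': 'M',
     'subject_birth_date': '2020-01-01',
     'subject_description': 'example animal'},
])
```

Once the CSV is in place, `ingest_subjects()` reads it and inserts one subject.Subject entry per row, as shown in the outputs above.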
notebooks/contributions/xcube_gen_job_api.ipynb
###Markdown xcube Generator Python Access LibraryThis notebook shows how to generate xcube cube using the xcube-gen web service [xcube-gen.brockmann-consult.de](https://xcube-gen.brockmann-consult.de).Please be aware, this notebook will not run unless you have access to the xcube-gen service as well as a bucket on AWS. ###Code from job_api import JobApi api = JobApi() api.whoami ###Output _____no_output_____ ###Markdown Generate a config ###Code import os cfg = { "input_configs": [ { "store_id": "@sentinelhub", "data_id": "S2L2A", "open_params": { "tile_size": [ 1000, 1000 ] } } ], "cube_config": { "variable_names": [ "B01", "B02" ], "bbox": [ 7, 53, 9, 55 ], "spatial_res": 0.001, "crs": "WGS84", "time_range": [ "2000-06-20", "2000-06-22" ], "time_period": "1D" }, "output_config": { "store_id": "s3", "store_params": { "bucket_name": os.environ["AWS_BUCKET"], "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"], "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"], } } } ###Output _____no_output_____ ###Markdown Generating an xcube ###Code response = api.create(cfg=cfg) job_id = response['result']['job_id'] response ###Output _____no_output_____ ###Markdown Getting the Status of a Generation Job ###Code # wait until job has been created import time time.sleep(8) api.status(job_id) ###Output _____no_output_____ ###Markdown Listing my Jobs ###Code api.list() ###Output _____no_output_____ ###Markdown Deleting a job ###Code api.delete(job_id) ###Output _____no_output_____
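Generation jobs are asynchronous, so a script will usually want to wait for completion rather than sleep for a fixed eight seconds as above. A generic polling sketch follows; the terminal state names and the shape of the status payload are assumptions here, since the real `JobApi` response format isn't documented in this notebook:

```python
import time

def wait_for_job(get_status, done_states=('SUCCEEDED', 'FAILED'),
                 poll_seconds=10, timeout_seconds=1800):
    """Poll `get_status()` until it returns a terminal state or we time out."""
    deadline = time.monotonic() + timeout_seconds
    while True:
        state = get_status()
        if state in done_states:
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f'job still {state!r} after {timeout_seconds}s')
        time.sleep(poll_seconds)

# Usage might look like:
#   wait_for_job(lambda: api.status(job_id)['result']['status'])
# where the ['result']['status'] path is a guess at the payload layout.
```

Accepting a callable rather than the `JobApi` object keeps the helper independent of the service's exact client interface.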
C1 Introduction to Data Science in Python/Ch1 Python Fundamentals/Week 1.ipynb
###Markdown ---_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- The Python Programming Language: Functions ###Code x = 1 y = 2 x + y x ###Output _____no_output_____ ###Markdown `add_numbers` is a function that takes two numbers and adds them together. ###Code def add_numbers(x, y): return x + y add_numbers(1, 2) ###Output _____no_output_____ ###Markdown `add_numbers` updated to take an optional 3rd parameter. Using `print` allows printing of multiple expressions within a single cell. ###Code def add_numbers(x,y,z=None): if (z==None): return x+y else: return x+y+z print(add_numbers(1, 2)) print(add_numbers(1, 2, 3)) ###Output 3 6 ###Markdown `add_numbers` updated to take an optional flag parameter. ###Code def add_numbers(x, y, z=None, flag=False): if (flag): print('Flag is true!') if (z==None): return x + y else: return x + y + z print(add_numbers(1, 2, flag=True)) ###Output Flag is true! 3 ###Markdown Assign function `add_numbers` to variable `a`. ###Code def add_numbers(x,y): return x+y a = add_numbers a(1,2) ###Output _____no_output_____ ###Markdown The Python Programming Language: Types and Sequences Use `type` to return the object's type. ###Code type('This is a string') type(None) type(1) type(1.0) type(add_numbers) ###Output _____no_output_____ ###Markdown Tuples are an immutable data structure (cannot be altered). ###Code x = (1, 'a', 2, 'b') type(x) ###Output _____no_output_____ ###Markdown Lists are a mutable data structure. ###Code x = [1, 'a', 2, 'b'] type(x) ###Output _____no_output_____ ###Markdown Use `append` to append an object to a list. ###Code x.append(3.3) print(x) ###Output [1, 'a', 2, 'b', 3.3] ###Markdown This is an example of how to loop through each item in the list. 
###Code for item in x: print(item) ###Output 1 a 2 b 3.3 ###Markdown Or using the indexing operator: ###Code i=0 while( i != len(x) ): print(x[i]) i = i + 1 ###Output 1 a 2 b 3.3 ###Markdown Use `+` to concatenate lists. ###Code [1,2] + [3,4] ###Output _____no_output_____ ###Markdown Use `*` to repeat lists. ###Code [1]*3 ###Output _____no_output_____ ###Markdown Use the `in` operator to check if something is inside a list. ###Code 1 in [1, 2, 3] ###Output _____no_output_____ ###Markdown Now let's look at strings. Use bracket notation to slice a string. ###Code x = 'This is a string' print(x[0]) #first character print(x[0:1]) #first character, but we have explicitly set the end character print(x[0:2]) #first two characters ###Output T T Th ###Markdown This will return the last element of the string. ###Code x[-1] ###Output _____no_output_____ ###Markdown This will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end. ###Code x[-4:-2] ###Output _____no_output_____ ###Markdown This is a slice from the beginning of the string and stopping before the 3rd element. ###Code x[:3] ###Output _____no_output_____ ###Markdown And this is a slice starting from the 3rd element of the string and going all the way to the end. ###Code x[3:] firstname = 'Christopher' lastname = 'Brooks' print(firstname + ' ' + lastname) print(firstname*3) print('Chris' in firstname) ###Output Christopher Brooks ChristopherChristopherChristopher True ###Markdown `split` returns a list of all the words in a string, or a list split on a specific character. ###Code firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list print(firstname) print(lastname) ###Output Christopher Brooks ###Markdown Make sure you convert objects to strings before concatenating. 
###Code 'Chris' + 2 'Chris' + str(2) ###Output _____no_output_____ ###Markdown Dictionaries associate keys with values. ###Code x = {'Christopher Brooks': '[email protected]', 'Bill Gates': '[email protected]'} x['Christopher Brooks'] # Retrieve a value by using the indexing operator x['Kevyn Collins-Thompson'] = None x['Kevyn Collins-Thompson'] ###Output _____no_output_____ ###Markdown Iterate over all of the keys: ###Code for name in x: print(x[name]) ###Output [email protected] [email protected] None ###Markdown Iterate over all of the values: ###Code for email in x.values(): print(email) ###Output [email protected] [email protected] None ###Markdown Iterate over all of the items in the list: ###Code for name, email in x.items(): print(name) print(email) ###Output Bill Gates [email protected] Christopher Brooks [email protected] Kevyn Collins-Thompson None ###Markdown You can unpack a sequence into different variables: ###Code x = ('Christopher', 'Brooks', '[email protected]') fname, lname, email = x fname lname ###Output _____no_output_____ ###Markdown Make sure the number of values you are unpacking matches the number of variables being assigned. ###Code x = ('Christopher', 'Brooks', '[email protected]', 'Ann Arbor') fname, lname, email = x ###Output _____no_output_____ ###Markdown The Python Programming Language: More on Strings ###Code print('Chris' + 2) print('Chris' + str(2)) ###Output Chris2 ###Markdown Python has a built in method for convenient string formatting. 
###Code sales_record = { 'price': 3.24, 'num_items': 4, 'person': 'Chris'} sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}' print(sales_statement.format(sales_record['person'], sales_record['num_items'], sales_record['price'], sales_record['num_items']*sales_record['price'])) ###Output Chris bought 4 item(s) at a price of 3.24 each for a total of 12.96 ###Markdown Reading and Writing CSV files Let's import our datafile mpg.csv, which contains fuel economy data for 234 cars. ###Code import csv %precision 2 with open('mpg.csv') as csvfile: mpg = list(csv.DictReader(csvfile)) mpg[:3] # The first three dictionaries in our list. ###Output _____no_output_____ ###Markdown `csv.Dictreader` has read in each row of our csv file as a dictionary. `len` shows that our list is comprised of 234 dictionaries. ###Code len(mpg) ###Output _____no_output_____ ###Markdown `keys` gives us the column names of our csv. ###Code mpg[0].keys() ###Output _____no_output_____ ###Markdown This is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float. ###Code sum( [float(d['cty']) for d in mpg] ) / len(mpg) ###Output _____no_output_____ ###Markdown Similarly this is how to find the average hwy fuel economy across all cars. ###Code sum( [float(d['hwy']) for d in mpg] ) / len(mpg) ###Output _____no_output_____ ###Markdown Use `set` to return the unique values for the number of cylinders the cars in our dataset have. ###Code cylinders = set( [d['cyl'] for d in mpg] ) cylinders ###Output _____no_output_____ ###Markdown Here's a more complex example where we are grouping the cars by number of cylinder, and finding the average cty mpg for each group. 
###Code CtyMpgByCyl = [] for c in cylinders: # iterate over all the cylinder levels summpg = 0 cyltypecount = 0 for d in mpg: # iterate over all dictionaries if d['cyl'] == c: # if the cylinder level type matches, summpg += float(d['cty']) # add the cty mpg cyltypecount += 1 # increment the count CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg') CtyMpgByCyl.sort(key=lambda x: x[0]) CtyMpgByCyl ###Output _____no_output_____ ###Markdown Use `set` to return the unique values for the class types in our dataset. ###Code vehicleclass = set( [d['class'] for d in mpg] ) # what are the class types vehicleclass ###Output _____no_output_____ ###Markdown And here's an example of how to find the average hwy mpg for each class of vehicle in our dataset. ###Code HwyMpgByClass = [] for t in vehicleclass: # iterate over all the vehicle classes summpg = 0 vclasscount = 0 for d in mpg: # iterate over all dictionaries if d['class'] == t: # if the cylinder amount type matches, summpg += float(d['hwy']) # add the hwy mpg vclasscount += 1 # increment the count HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg') HwyMpgByClass.sort(key=lambda x: x[1]) HwyMpgByClass ###Output _____no_output_____ ###Markdown The Python Programming Language: Dates and Times ###Code import datetime as dt import time as tm ###Output _____no_output_____ ###Markdown `time` returns the current time in seconds since the Epoch. (January 1st, 1970) ###Code tm.time() ###Output _____no_output_____ ###Markdown Convert the timestamp to datetime. ###Code dtnow = dt.datetime.fromtimestamp(tm.time()) dtnow ###Output _____no_output_____ ###Markdown Handy datetime attributes: ###Code dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime ###Output _____no_output_____ ###Markdown `timedelta` is a duration expressing the difference between two dates. 
###Code delta = dt.timedelta(days = 100) # create a timedelta of 100 days

delta
###Output _____no_output_____ ###Markdown `date.today` returns the current local date. ###Code today = dt.date.today()

today - delta # the date 100 days ago

today > today-delta # compare dates
###Output _____no_output_____ ###Markdown The Python Programming Language: Objects and map() An example of a class in Python: ###Code class Person:
    department = 'School of Information' #a class variable

    def set_name(self, new_name): #a method
        self.name = new_name
    def set_location(self, new_location):
        self.location = new_location

person = Person()
person.set_name('Christopher Brooks')
person.set_location('Ann Arbor, MI, USA')
print('{} lives in {} and works in the department {}'.format(person.name, person.location, person.department))
###Output Christopher Brooks lives in Ann Arbor, MI, USA and works in the department School of Information ###Markdown Here's an example of mapping the `min` function between two lists. ###Code store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
store3 = [11.00, 9.00, 11.11, 1.02]
cheapest1 = map(min, store1, store2, store3)
cheapest1
cheapest2 = map(min, store1, store2) # min needs at least two iterables to compare pairwise
cheapest2
###Output _____no_output_____ ###Markdown Now let's iterate through the map object to see the values. ###Code for item in cheapest1:
    print(item)
for item in cheapest2:
    print(item)
###Output _____no_output_____ ###Markdown The Python Programming Language: Lambda and List Comprehensions Here's an example of a lambda that takes in three parameters and adds the first two. ###Code my_function = lambda a, b, c : a + b
my_function(1, 2, 3)
###Output _____no_output_____ ###Markdown Let's iterate from 0 to 999 and return the even numbers. ###Code my_list = []
for number in range(0, 1000):
    if number % 2 == 0:
        my_list.append(number)
my_list
###Output _____no_output_____ ###Markdown Now the same thing but with list comprehension. 
###Code my_list = [number for number in range(0,1000) if number % 2 == 0]
my_list
###Output _____no_output_____ ###Markdown The Python Programming Language: Numerical Python (NumPy) ###Code import numpy as np
###Output _____no_output_____ ###Markdown Creating Arrays Create a list and convert it to a numpy array ###Code mylist = [1, 2, 3]
x = np.array(mylist)
x
###Output _____no_output_____ ###Markdown Or just pass in a list directly ###Code y = np.array([4, 5, 6])
y
###Output _____no_output_____ ###Markdown Pass in a list of lists to create a multidimensional array. ###Code m = np.array([[7, 8, 9], [10, 11, 12]])
m
###Output _____no_output_____ ###Markdown Use the shape method to find the dimensions of the array. (rows, columns) ###Code m.shape
###Output _____no_output_____ ###Markdown `arange` returns evenly spaced values within a given interval. ###Code n = np.arange(0, 30, 2) # start at 0, count up by 2, stop before 30
n
###Output _____no_output_____ ###Markdown `reshape` returns an array with the same data with a new shape. ###Code n = n.reshape(3, 5) # reshape array to be 3x5
n
###Output _____no_output_____ ###Markdown `resize` changes the shape of the array in place and returns None ###Code n.resize(5, 3)
n
###Output _____no_output_____ ###Markdown `linspace` returns evenly spaced numbers over a specified interval. ###Code o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4
o
###Output _____no_output_____ ###Markdown `resize` changes the shape and size of array in-place. ###Code o.resize(3, 3)
o
###Output _____no_output_____ ###Markdown `ones` returns a new array of given shape and type, filled with ones. ###Code np.ones((3, 2))
###Output _____no_output_____ ###Markdown `zeros` returns a new array of given shape and type, filled with zeros. ###Code np.zeros((2, 3))
###Output _____no_output_____ ###Markdown `eye` returns a 2-D array with ones on the diagonal and zeros elsewhere. 
###Code np.eye(3) ###Output _____no_output_____ ###Markdown `diag` extracts a diagonal or constructs a diagonal array. ###Code np.diag(y) ###Output _____no_output_____ ###Markdown Create an array using repeating list (or see `np.tile`) ###Code np.array([1, 2, 3] * 3) ###Output _____no_output_____ ###Markdown Repeat elements of an array using `repeat`. ###Code np.repeat([1, 2, 3], 4) ###Output _____no_output_____ ###Markdown Combining Arrays ###Code p = np.ones([2, 3], int) p ###Output _____no_output_____ ###Markdown Use `vstack` to stack arrays in sequence vertically (row wise). ###Code np.vstack([p, 2*p]) ###Output _____no_output_____ ###Markdown Use `hstack` to stack arrays in sequence horizontally (column wise). ###Code np.hstack([p, 2*p]) ###Output _____no_output_____ ###Markdown Operations Use `+`, `-`, `*`, `/` and `**` to perform element wise addition, subtraction, multiplication, division and power. ###Code print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9] print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3] print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18] print(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5] print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9] ###Output [1 4 9] ###Markdown **Dot Product:** $ \begin{bmatrix}x_1 \ x_2 \ x_3\end{bmatrix}\cdot\begin{bmatrix}y_1 \\ y_2 \\ y_3\end{bmatrix}= x_1 y_1 + x_2 y_2 + x_3 y_3$ ###Code x.dot(y) # dot product 1*4 + 2*5 + 3*6 z = np.array([y, y**2]) print(len(z)) # number of rows of array ###Output 2 ###Markdown Let's look at transposing arrays. Transposing permutes the dimensions of the array. ###Code z = np.array([y, y**2]) z ###Output _____no_output_____ ###Markdown The shape of array `z` is `(2,3)` before transposing. ###Code z.shape ###Output _____no_output_____ ###Markdown Use `.T` to get the transpose. ###Code z.T ###Output _____no_output_____ ###Markdown The number of rows has swapped with the number of columns. 
###Code z.T.shape ###Output _____no_output_____ ###Markdown Use `.dtype` to see the data type of the elements in the array. ###Code z.dtype ###Output _____no_output_____ ###Markdown Use `.astype` to cast to a specific type. ###Code z = z.astype('f') z.dtype ###Output _____no_output_____ ###Markdown Math Functions Numpy has many built in math functions that can be performed on arrays. ###Code a = np.array([-4, -2, 1, 3, 5]) a.sum() a.max() a.min() a.mean() a.std() ###Output _____no_output_____ ###Markdown `argmax` and `argmin` return the index of the maximum and minimum values in the array. ###Code a.argmax() a.argmin() ###Output _____no_output_____ ###Markdown Indexing / Slicing ###Code s = np.arange(13)**2 s ###Output _____no_output_____ ###Markdown Use bracket notation to get the value at a specific index. Remember that indexing starts at 0. ###Code s[0], s[4], s[-1] ###Output _____no_output_____ ###Markdown Use `:` to indicate a range. `array[start:stop]`Leaving `start` or `stop` empty will default to the beginning/end of the array. ###Code s[1:5] ###Output _____no_output_____ ###Markdown Use negatives to count from the back. ###Code s[-4:] ###Output _____no_output_____ ###Markdown A second `:` can be used to indicate step-size. `array[start:stop:stepsize]`Here we are starting 5th element from the end, and counting backwards by 2 until the beginning of the array is reached. ###Code s[-5::-2] ###Output _____no_output_____ ###Markdown Let's look at a multidimensional array. ###Code r = np.arange(36) r.resize((6, 6)) r ###Output _____no_output_____ ###Markdown Use bracket notation to slice: `array[row, column]` ###Code r[2, 2] ###Output _____no_output_____ ###Markdown And use : to select a range of rows or columns ###Code r[3, 3:6] ###Output _____no_output_____ ###Markdown Here we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column. 
###Code r[:2, :-1] ###Output _____no_output_____ ###Markdown This is a slice of the last row, and only every other element. ###Code r[-1, ::2] ###Output _____no_output_____ ###Markdown We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see `np.where`) ###Code r[r > 30] ###Output _____no_output_____ ###Markdown Here we are assigning all values in the array that are greater than 30 to the value of 30. ###Code r[r > 30] = 30 r ###Output _____no_output_____ ###Markdown Copying Data Be careful with copying and modifying arrays in NumPy!`r2` is a slice of `r` ###Code r2 = r[:3,:3] r2 ###Output _____no_output_____ ###Markdown Set this slice's values to zero ([:] selects the entire array) ###Code r2[:] = 0 r2 ###Output _____no_output_____ ###Markdown `r` has also been changed! ###Code r ###Output _____no_output_____ ###Markdown To avoid this, use `r.copy` to create a copy that will not affect the original array ###Code r_copy = r.copy() r_copy ###Output _____no_output_____ ###Markdown Now when r_copy is modified, r will not be changed. ###Code r_copy[:] = 10 print(r_copy, '\n') print(r) ###Output [[10 10 10 10 10 10] [10 10 10 10 10 10] [10 10 10 10 10 10] [10 10 10 10 10 10] [10 10 10 10 10 10] [10 10 10 10 10 10]] [[ 0 0 0 3 4 5] [ 0 0 0 9 10 11] [ 0 0 0 15 16 17] [18 19 20 21 22 23] [24 25 26 27 28 29] [30 30 30 30 30 30]] ###Markdown Iterating Over Arrays Let's create a new 4 by 3 array of random numbers 0-9. 
###Code test = np.random.randint(0, 10, (4,3)) test ###Output _____no_output_____ ###Markdown Iterate by row: ###Code for row in test: print(row) ###Output [8 7 2] [2 9 4] [3 0 4] [6 1 7] ###Markdown Iterate by index: ###Code for i in range(len(test)): print(test[i]) ###Output [8 7 2] [2 9 4] [3 0 4] [6 1 7] ###Markdown Iterate by row and index: ###Code for i, row in enumerate(test): print('row', i, 'is', row) ###Output row 0 is [8 7 2] row 1 is [2 9 4] row 2 is [3 0 4] row 3 is [6 1 7] ###Markdown Use `zip` to iterate over multiple iterables. ###Code test2 = test**2 test2 for i, j in zip(test, test2): print(i,'+',j,'=',i+j) ###Output [8 7 2] + [64 49 4] = [72 56 6] [2 9 4] + [ 4 81 16] = [ 6 90 20] [3 0 4] + [ 9 0 16] = [12 0 20] [6 1 7] + [36 1 49] = [42 2 56] ###Markdown Answer of quiz ADBBB ACABA AB ###Code ['a', 'b', 'c'] + [1, 2, 3] type(lambda x: x+1) r = np.arange(36).reshape(6, 6) r r.reshape(36) ###Output _____no_output_____
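###Markdown The notebook closes by calling `r.reshape(36)`; a short sketch of the view-versus-copy behavior this touches on, tying back to the Copying Data section above (the variable names `v` and `c` are my own):

```python
import numpy as np

r = np.arange(36).reshape(6, 6)

# reshape returns a view of the same data where possible,
# so writing through the view changes the original array
v = r.reshape(36)
v[0] = 99
print(r[0, 0])    # 99 -- the original array changed

# copy() allocates independent data, as shown earlier with r_copy
c = r.copy()
c[0, 0] = -1
print(r[0, 0])    # still 99 -- r is unaffected
```

This is why `resize` (in-place) and `reshape` (new view) were worth distinguishing earlier: mutations through a reshaped view still reach the original data.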
decorator/Decorator.ipynb
###Markdown * Function decorators: do name rebinding at function definition time, providing a layer of logic that can manage functions and methods, or later calls to them.* Class decorators: do name rebinding at class definition time, providing a layer of logic that can manage classes, or the instances created by later calls to them. In short, decorators provide a way of inserting automatically run code at the end of function and class definition statements. Managing Functions and Classes * Function managers* Class managersClass decorators can also be used to manage class objects directly, instead of or in addition to instance creation calls - to augment a class with new methods. Function Decorators ###Code def decorator(func):   # any one-argument callable works as a decorator
    return func

@decorator
def F(arg):
    pass

F(99)
###Output _____no_output_____ ###Markdown `decorator` is a one-argument callable object. Calling F is equivalent to calling whatever `decorator` returns. The decorator is invoked at decoration time, and the callable it returns is invoked when the original function name is later called. Class Decorators The class is automatically passed to the decorator function, and the decorator's result is assigned back to the class name. The net effect is that calling the class name later to create an instance winds up triggering the callable returned by the decorator, which may or may not call the class itself. ###Code def register_symbol_modality(name='default'):
    print("decorated")
    print(name()())
    return name

@register_symbol_modality
class A:
    def __call__(self):
        print("in A")

A()
###Output _____no_output_____ ###Markdown How does the wrapper have access to the enclosing function's variables? Because of the LEGB rule: the wrapper is a closure, so names in the enclosing function's scope stay visible inside it. Where we can use this: Example 1, timing. When the decorator itself takes an argument: ###Code def once(n):   # decorator factory: once(5) returns the real decorator
    def decorator(func):
        def wrapper(*args):
            return func(*args)
        return wrapper
    return decorator

@once(5)
def add(a, b):
    return a + b

add(1, 2)  # think of this as add = once(5)(add): four callables in play
###Output _____no_output_____
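###Markdown The "Example 1: timing" heading above never gets actual timing code. A minimal sketch of what a timing decorator could look like, assuming a simple print-based report (the name `timed` and the output format are my own choices, not from the notes):

```python
import functools
import time

def timed(func):
    """Report how long each call to the decorated function takes."""
    @functools.wraps(func)   # keep func's original name and docstring
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print('{} took {:.6f} seconds'.format(func.__name__, elapsed))
        return result
    return wrapper

@timed
def add(a, b):
    return a + b

total = add(1, 2)   # prints a one-line timing report, returns 3
```

The wrapper sees `func` and `start` through the LEGB rule discussed above: it is a closure over the enclosing `timed` call's scope.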
components/gcp/dataproc/submit_pyspark_job/sample.ipynb
###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.3.0/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-alpha.2/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/ff116b6f1a0f0cdaafb64fcd04214c169045e6fc/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown Dataproc - 
Submit PySpark Job Intended UseA Kubeflow Pipeline component to submit a PySpark job to Google Cloud Dataproc service. Run-Time Parameters:Name | Description:--- | :----------project_id | Required. The ID of the Google Cloud Platform project that the cluster belongs to.region | Required. The Cloud Dataproc region in which to handle the request.cluster_name | Required. The cluster to run the job.main_python_file_uri | Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.args | Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.pyspark_job | Optional. The full payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob).job | Optional. The full payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs).wait_interval | Optional. The wait seconds between polling the operation. Defaults to 30s. Output:Name | Description:--- | :----------job_id | The ID of the created job. SampleNote: the sample code below works in both IPython notebook or python code directly. Setup a Dataproc clusterFollow the [guide](https://cloud.google.com/dataproc/docs/guides/create-cluster) to create a new Dataproc cluster or reuse an existing one. Prepare PySpark jobUpload your PySpark code file to a Google Cloud Storage (GCS) bucket. 
For example, here is a public accessible hello-world.py in GCS: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' SUBMIT_PYSPARK_JOB_SPEC_URI = 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/dataproc/submit_pyspark_job/component.yaml' ###Output _____no_output_____ ###Markdown Install KFP SDKInstall the SDK (Uncomment the code if the SDK is not installed before) ###Code # KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.12/kfp.tar.gz' # !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown Load component definitions ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url(SUBMIT_PYSPARK_JOB_SPEC_URI) display(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown Here is an illustrative pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op(project_id, region, cluster_name, main_python_file_uri, args, pyspark_job, job, wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = 
dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.pipeline.tar.gz' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). 
| Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/v1.7.0-alpha.3/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/2df775a28045bda15372d6dd4644f71dcfe41bfe/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.3/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
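Such a driver is an ordinary PySpark script. As a rough sketch — not the exact contents of the sample `hello-world.py` shown next — it might look like this (the `pyspark` import is kept inside `main` only so the sketch can be read without a Spark environment):

```python
# Hypothetical minimal PySpark driver; the actual sample hello-world.py
# referenced below may differ.
def main():
    import pyspark  # requires a Spark environment, e.g. a Dataproc node

    sc = pyspark.SparkContext()
    words = sc.parallelize(["Hello,", "world!"])
    print(" ".join(words.collect()))
```

On the cluster, Dataproc invokes the file directly, so a real script would end with a `main()` call guarded by `if __name__ == '__main__':`.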
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. 
For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/e4d9e2b67cf39c5f12b9c1477cae11feb1a74dc7/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/0e794e8a0eff6f81ddc857946ee8311c7c431ec2/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
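Besides the script itself, the component's optional `job` parameter accepts a JSON-serialized payload of job-level fields. A minimal sketch that attaches labels — the label keys and values here are made up for illustration:

```python
import json

# Hypothetical job-level payload; "labels" is a field of the Dataproc
# Job resource, and the values here are illustrative only.
job_payload = {"labels": {"team": "data-eng", "stage": "prep"}}
JOB = json.dumps(job_payload)  # the component expects a JSON string
```

Passing `job=JOB` in place of the default `'{}'` would then carry those labels on the submitted job.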
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.2/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
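Whatever is passed through the component's `args` parameter reaches the driver as ordinary command-line arguments (`sys.argv[1:]`). A hedged sketch of script-side handling — the flag names are hypothetical, not part of the sample:

```python
def parse_driver_args(argv):
    # Hypothetical "--flag value" pairs, e.g. ["--input", "gs://...", ...].
    opts = {}
    it = iter(argv)
    for token in it:
        if token.startswith("--"):
            opts[token[2:]] = next(it, None)
    return opts

# On the cluster this would be called as parse_driver_args(sys.argv[1:]).
```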
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.1/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.1-beta.1/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
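Once uploaded, the script URI and Spark settings can alternatively travel in the `pyspark_job` payload rather than the top-level parameters. A sketch under the assumption that field names follow the PySparkJob REST resource — the bucket name and property value below are hypothetical:

```python
import json

# Hypothetical payload; mainPythonFileUri and properties follow the
# Dataproc PySparkJob REST resource, and the bucket name is made up.
pyspark_job_payload = {
    "mainPythonFileUri": "gs://my-bucket/pyspark/hello-world.py",
    "properties": {"spark.executor.memory": "2g"},
}
PYSPARK_JOB = json.dumps(pyspark_job_payload)  # passed as the pyspark_job argument
```

Note the caution in the runtime-arguments table: settings expressible as job properties (such as `--conf`) belong here, not in `args`.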
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. 
For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/0b07e456b1f319d8b7a7301274f55c00fda9f537/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. 
| String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/4e7e6e866c1256e641b0c3effc55438e6e4b30f6/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
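Separately from the driver file, the sample pipeline below passes `pyspark_job` as a JSON *string* (its default is `'{}'`). A sketch of building such a payload — the field names follow the linked PySparkJob reference, but the concrete values here are invented for illustration:

```python
import json

# Illustrative PySparkJob payload; field names follow the PySparkJob
# REST reference, the values are made up for this sketch.
pyspark_job = {
    "args": ["--input", "gs://my-bucket/raw", "--output", "gs://my-bucket/clean"],
    "properties": {"spark.executor.memory": "2g"},
}

# The pipeline parameter expects a JSON string, so serialize before passing it.
pyspark_job_str = json.dumps(pyspark_job)
print(pyspark_job_str)
```
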
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.6.0-rc.0/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
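If you are drafting the driver yourself, it only needs to exist as a local `.py` file before the upload. The snippet below writes a stand-in driver to a temporary path — its content is our illustration; the actual `hello-world.py` in the public bucket may differ:

```python
import os
import tempfile

# Stand-in PySpark driver source; illustrative only.
driver_source = (
    "from pyspark.sql import SparkSession\n"
    "\n"
    "spark = SparkSession.builder.appName('hello-world').getOrCreate()\n"
    "print(spark.range(5).count())\n"
    "spark.stop()\n"
)

driver_path = os.path.join(tempfile.gettempdir(), "hello-world.py")
with open(driver_path, "w") as f:
    f.write(driver_source)
```
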
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown Submitting a 
PySpark Job to Cloud DataprocA Kubeflow Pipeline component to submit a PySpark job to Google Cloud Dataproc service. Intended UseUse the component to run an Apache PySpark job as one preprocessing step in a KFP pipeline. Runtime argumentsName | Description | Type | Optional | Default:--- | :---------- | :--- | :------- | :------project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | GCPProjectID | No |region | The Dataproc region that handles the request. | GCPRegion | No |cluster_name | The name of the cluster that runs the job. | String | No |main_python_file_uri | The Hadoop Compatible Filesystem (HCFS) URI of the main Python file to use as the driver. Must be a .py file. | GCSPath | No |args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | List | Yes | `[]`pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Dict | Yes | `{}`job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Dict | Yes | `{}`wait_interval | The number of seconds to pause between polling the operation. | Integer | Yes | `30` OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up the project by following the [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret of the [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example:```component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))```* Grant the Kubeflow user service account the `roles/dataproc.editor` role on the project. 
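The `wait_interval` argument in the runtime-arguments table above controls how long the component pauses between status polls (30 seconds by default). A schematic version of such a polling loop, with a stubbed status function standing in for the Dataproc API — the component's real loop is internal and may differ in detail:

```python
import time

def wait_for_done(get_state, wait_interval=30, max_polls=120):
    """Poll get_state() every wait_interval seconds until the job finishes.

    get_state is a stand-in for a real Dataproc jobs.get call.
    """
    for _ in range(max_polls):
        state = get_state()
        if state == "DONE":
            return state
        if state in ("ERROR", "CANCELLED"):
            raise RuntimeError(f"job ended in state {state}")
        time.sleep(wait_interval)
    raise TimeoutError("gave up waiting for the job")

# Demo with a fake job that finishes on the third poll.
states = iter(["PENDING", "RUNNING", "DONE"])
print(wait_for_done(lambda: next(states), wait_interval=0))
```
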
Detailed DescriptionThis component creates a PySpark job from [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Here are the steps to use the component in a pipeline:1. Install KFP SDK ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown For more information about the component, please check out:* [Component python code](https://github.com/kubeflow/pipelines/blob/master/component_sdk/python/kfp_component/google/dataproc/_submit_pyspark_job.py)* [Component docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataproc/submit_pyspark_job/sample.ipynb)* [Dataproc PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob) SampleNote: the sample code below works in an IPython notebook or directly in Python code. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
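The runtime-arguments table also warns against smuggling flags such as `--conf` into `args`, since they collide with job properties. An illustrative guard for that rule — the helper and the exact flag set are ours, not part of the component:

```python
# Flags that belong in job properties rather than driver args; --conf is
# the one the docs call out, the rest of the set is illustrative.
RESERVED_FLAGS = {"--conf", "--properties"}

def check_driver_args(args):
    """Raise if args contains a flag that should be a job property instead."""
    bad = [a for a in args if a.split("=", 1)[0] in RESERVED_FLAGS]
    if bad:
        raise ValueError(f"set these as job properties instead: {bad}")
    return args

print(check_driver_args(["--input", "gs://my-bucket/data"]))
```
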
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-alpha.1/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
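Both `pyspark_job` and `job` default to the JSON string `'{}'` in the pipeline below. When wiring these parameters through your own code, a small parse-with-default helper (ours, not part of the KFP SDK) avoids surprises with empty or missing values:

```python
import json

def parse_json_param(value):
    """Parse a JSON-string pipeline parameter, treating '' and None as {}."""
    return json.loads(value) if value else {}

assert parse_json_param("{}") == {}
assert parse_json_param("") == {}
print(parse_json_param('{"properties": {"spark.driver.memory": "1g"}}'))
```
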
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. 
For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/02c991dd265054b040265b3dfa1903d5b49df859/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
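If you script the upload rather than running `gsutil` by hand, you typically need the bucket and object name separately (for example, to hand to a storage client). A minimal parser for the `gs://` URI — the function is ours, for illustration:

```python
def split_gcs_uri(uri):
    """Split 'gs://bucket/path/to/obj' into (bucket, object_name)."""
    if not uri.startswith("gs://"):
        raise ValueError(f"not a GCS URI: {uri!r}")
    bucket, _, obj = uri[len("gs://"):].partition("/")
    return bucket, obj

bucket, obj = split_gcs_uri(
    "gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f"
    "/src/pyspark/hello-world/hello-world.py")
print(bucket, obj)
```
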
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/38771da09094640cd2786a4b5130b26ea140f864/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
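Under the hood, `cluster_name`, `main_python_file_uri`, and the `pyspark_job` payload all end up in a single request body for the submit-job API linked above. A sketch of that shape — the field names follow the public jobs.submit reference, and the exact structure here is our reading of the docs, not the component's source:

```python
def build_submit_body(cluster_name, main_python_file_uri, pyspark_job=None):
    """Assemble a jobs.submit-style request body for a PySpark job."""
    return {
        "job": {
            "placement": {"clusterName": cluster_name},
            "pysparkJob": {
                "mainPythonFileUri": main_python_file_uri,
                **(pyspark_job or {}),
            },
        }
    }

body = build_submit_body("my-cluster", "gs://my-bucket/hello-world.py")
print(body)
```
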
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
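Note that the `pyspark_job` and `job` arguments default to the string `'{}'`: they are JSON-serialized payloads, not Python dicts. A minimal sketch of building one (field names follow the PySparkJob REST reference; the bucket path is a placeholder):

```python
import json

# Hypothetical payload; mainPythonFileUri and args mirror the component's
# individual parameters and would normally be left to those instead.
pyspark_job = {
    "mainPythonFileUri": "gs://my-bucket/pyspark/hello-world.py",
    "args": ["--input", "gs://my-bucket/data.csv"],
    "properties": {"spark.executor.memory": "2g"},
}
payload = json.dumps(pyspark_job)

# An empty payload -- the notebook's default -- round-trips to {}:
assert json.loads("{}") == {}
print(payload)
```

Passing the whole payload is useful when you need fields the component does not expose as individual parameters.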
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/0ad0b368802eca8ca73b40fe08adb6d97af6a62f/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
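The runtime-arguments table warns against passing flags such as `--conf` through `args`, since they can collide with job properties. A hedged sketch of a guard for that rule (the reserved set here is an assumption for illustration, not an exhaustive list):

```python
# Assumed reserved set for illustration only -- the component docs call out
# property-style flags like --conf; Dataproc itself defines the full rules.
RESERVED_FLAGS = {"--conf", "--properties"}

def validate_driver_args(args):
    """Reject args that should be set as job properties instead."""
    bad = [a for a in args if a.split("=", 1)[0] in RESERVED_FLAGS]
    if bad:
        raise ValueError(f"set these as job properties, not args: {bad}")
    return args

validate_driver_args(["--input", "gs://my-bucket/data.csv"])  # hypothetical args
```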
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. 
For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1f65a564d4d44fa5a0dc6c59929ca2211ebb3d1c/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
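The example pipeline in this notebook passes `wait_interval='30'`: conceptually, the component polls the Dataproc job every `wait_interval` seconds until it reaches a terminal state. A simplified sketch of that loop (state names follow the Dataproc job lifecycle; this is not the component's actual implementation):

```python
import time

TERMINAL_STATES = {"DONE", "ERROR", "CANCELLED"}

def wait_for_job(get_state, wait_interval=30, sleep=time.sleep):
    """Poll get_state() until the job reaches a terminal state."""
    while True:
        state = get_state()
        if state in TERMINAL_STATES:
            return state
        sleep(wait_interval)

# Simulated run -- no real Dataproc call is made:
states = iter(["PENDING", "RUNNING", "RUNNING", "DONE"])
print(wait_for_job(lambda: next(states), wait_interval=0, sleep=lambda s: None))
```

A longer interval reduces API traffic at the cost of slower detection of job completion.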
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.6.0/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
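Stepping back to the component-loading cell: the notebook pins the component either to a commit SHA or to a release tag (here `1.6.0`). Reconstructing the raw-GitHub URL for a given ref is plain string work, which makes it easy to keep the pin in one place:

```python
COMPONENT_PATH = "components/gcp/dataproc/submit_pyspark_job/component.yaml"

def component_url(ref: str) -> str:
    """Raw-GitHub URL for the component at a pinned ref (tag or commit SHA)."""
    return f"https://raw.githubusercontent.com/kubeflow/pipelines/{ref}/{COMPONENT_PATH}"

# Matches the URL passed to comp.load_component_from_url in this copy:
print(component_url("1.6.0"))
```

Pinning to an immutable SHA gives reproducibility; pinning to a release tag keeps you on a tested release line.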
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. 
For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/d0aa15dfb3ff618e8cd1b03f86804ec4307fd9c2/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
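Later in this notebook, the compiled pipeline package is named after the pipeline function (`pipeline_func.__name__ + '.zip'`). The same convention in isolation, with a stand-in function since no real pipeline is needed to show it:

```python
def pipeline_package_name(func) -> str:
    """Mirror the notebook's convention: '<pipeline function name>.zip'."""
    return func.__name__ + ".zip"

def dataproc_submit_pyspark_job_pipeline():  # stand-in for the real pipeline
    pass

print(pipeline_package_name(dataproc_submit_pyspark_job_pipeline))
# -> dataproc_submit_pyspark_job_pipeline.zip
```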
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.2.0/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. 
For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/a8d3b6977df26a89701cd229f01c1840a8475521/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.0.0/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/38771da09094640cd2786a4b5130b26ea140f864/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. 
For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/e8524eefb138725fc06600d1956da0f4dd477178/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage:

###Code
!gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py
###Output
_____no_output_____
###Markdown
Set sample parameters

###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py'
ARGS = ''
EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component

###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json

@dsl.pipeline(
    name='Dataproc submit PySpark job pipeline',
    description='Dataproc submit PySpark job pipeline'
)
def dataproc_submit_pyspark_job_pipeline(
    project_id = PROJECT_ID,
    region = REGION,
    cluster_name = CLUSTER_NAME,
    main_python_file_uri = PYSPARK_FILE_URI,
    args = ARGS,
    pyspark_job='{}',
    job='{}',
    wait_interval='30'
):
    dataproc_submit_pyspark_job_op(
        project_id=project_id,
        region=region,
        cluster_name=cluster_name,
        main_python_file_uri=main_python_file_uri,
        args=args,
        pyspark_job=pyspark_job,
        job=job,
        wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline

###Code
pipeline_func = dataproc_submit_pyspark_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution

###Code
#Specify pipeline argument values
arguments = {}

#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)

#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename,
                                 arguments)
###Output
_____no_output_____
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* The component can authenticate to GCP. 
Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr !pip3 install kfp --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.0-alpha.1/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ###Output _____no_output_____ ###Markdown NameData 
preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. | String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. 
For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/48dd338c8ab328084633c51704cda77db79ac8c2/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
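The sample parameters below set `ARGS = ''` because `hello-world.py` takes no arguments; when the driver script does expect command-line arguments, one way to supply them — an assumption here, based on the component declaring `args` as a `List` and the other payload parameters being JSON strings — is to serialize them as a JSON list:

```python
import json

# Hypothetical driver arguments; the component's `args` parameter (type List)
# can be supplied as a JSON-encoded string like the other payload parameters.
ARGS = json.dumps(["--input", "gs://my-bucket/data.csv", "--verbose"])
print(ARGS)
```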
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. 
| String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/eb830cd73ca148e5a1a6485a9374c2dc068314bc/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. 
| String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/74d8e592174ae90175f66c3c00ba76a835cfba6d/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____ ###Markdown NameData preparation using PySpark on Cloud Dataproc LabelCloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components SummaryA Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended useUse the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------------------|------------|----------|--------------|-----------------|---------|| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | || cluster_name | The name of the cluster to run the job. | No | String | | || main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | || args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None || pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None || job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None | OutputName | Description | Type:--- | :---------- | :---job_id | The ID of the created job. 
| String Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component creates a PySpark job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK: ###Code %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ###Output _____no_output_____ ###Markdown 2. Load the component using KFP SDK ###Code import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/a97f1d0ad0e7b92203f35c5b0b9af3a314952e05/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) ###Output _____no_output_____ ###Markdown SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code. Prepare a PySpark jobUpload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible `hello-world.py` in Cloud Storage: ###Code !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py ###Output _____no_output_____ ###Markdown Set sample parameters ###Code PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' ###Output _____no_output_____ ###Markdown Example pipeline that uses the component ###Code import kfp.dsl as dsl import kfp.gcp as gcp import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa')) ###Output _____no_output_____ ###Markdown Compile the pipeline ###Code pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ###Output _____no_output_____ ###Markdown Submit the pipeline for execution ###Code #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, 
arguments) ###Output _____no_output_____
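The pipeline above passes `pyspark_job='{}'`; per the Dataproc submit-job REST API linked earlier, that JSON object can also carry fields such as `args`, `pythonFileUris`, or `properties`. A minimal sketch of composing it (the helper name, bucket path, and property values are illustrative assumptions, not required settings):

```python
import json

def build_pyspark_job(main_python_file_uri, args=None, properties=None):
    """Compose the JSON `pyspark_job` payload accepted by the Dataproc submit-job API."""
    job = {"mainPythonFileUri": main_python_file_uri}
    if args:
        job["args"] = list(args)              # optional command-line args for the script
    if properties:
        job["properties"] = dict(properties)  # optional Spark configuration properties
    return json.dumps(job)

# Illustrative values only -- the bucket path and property are assumptions:
payload = build_pyspark_job(
    "gs://my-bucket/hello-world.py",
    properties={"spark.executor.memory": "2g"},
)
```

The resulting string could then be passed as the `pyspark_job` pipeline argument in place of `'{}'`.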
Tensorflow Object Detection/detect_object_in_webcam_video.ipynb
###Markdown Detection Objects in a webcam image streamThis notebook is based on the [official Tensorflow Object Detection demo](https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb) and only contains some slight changes. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. Imports ###Code import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from distutils.version import StrictVersion from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') ###Output _____no_output_____ ###Markdown Env setup ###Code # This is needed to display the images. %matplotlib inline ###Output _____no_output_____ ###Markdown Object detection importsHere are the imports from the object detection module. ###Code from utils import label_map_util from utils import visualization_utils as vis_util ###Output C:\Users\Gilbert\Downloads\models\research\object_detection\utils\visualization_utils.py:26: UserWarning: matplotlib.pyplot as already been imported, this call will have no effect. import matplotlib; matplotlib.use('Agg') # pylint: disable=multiple-statements ###Markdown Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. 
See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ###Code # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') ###Output _____no_output_____ ###Markdown Download Model ###Code opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) ###Output _____no_output_____ ###Markdown Load a (frozen) Tensorflow model into memory. ###Code detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ###Output _____no_output_____ ###Markdown Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. 
Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ###Code category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) ###Output _____no_output_____ ###Markdown Detection ###Code def run_inference_for_single_image(image, graph): if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict import cv2 cap = cv2.VideoCapture(0) 
try: with detection_graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) while True: ret, image_np = cap.read() # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) cv2.imshow('object_detection', cv2.resize(image_np, (800, 600))) if cv2.waitKey(25) & 0xFF == ord('q'): cap.release() cv2.destroyAllWindows() break except Exception as e: print(e) cap.release() ###Output _____no_output_____ ###Markdown Detection Objects in a webcam image stream Run in Google Colab View source on GitHub This notebook is based on the [official Tensorflow Object Detection demo](https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb) and only contains some slight changes. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. 
Imports ###Code import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from distutils.version import StrictVersion from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') ###Output _____no_output_____ ###Markdown Env setup ###Code # This is needed to display the images. %matplotlib inline ###Output _____no_output_____ ###Markdown Object detection importsHere are the imports from the object detection module. ###Code from utils import label_map_util from utils import visualization_utils as vis_util ###Output C:\Users\Gilbert\Downloads\models\research\object_detection\utils\visualization_utils.py:26: UserWarning: matplotlib.pyplot as already been imported, this call will have no effect. import matplotlib; matplotlib.use('Agg') # pylint: disable=multiple-statements ###Markdown Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ###Code # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. 
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') ###Output _____no_output_____ ###Markdown Download Model ###Code opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) ###Output _____no_output_____ ###Markdown Load a (frozen) Tensorflow model into memory. ###Code detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ###Output _____no_output_____ ###Markdown Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ###Code category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) ###Output _____no_output_____ ###Markdown Detection ###Code def run_inference_for_single_image(image, graph): if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. 
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict import cv2 cap = cv2.VideoCapture(0) try: with detection_graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) while True: ret, image_np = cap.read() # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. 
output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) cv2.imshow('object_detection', cv2.resize(image_np, (800, 600))) if cv2.waitKey(25) & 0xFF == ord('q'): cap.release() cv2.destroyAllWindows() break except Exception as e: print(e) cap.release() ###Output _____no_output_____ ###Markdown Detection Objects in a webcam image stream Run in Google Colab View source on GitHub This notebook is based on the [official Tensorflow Object Detection demo](https://github.com/tensorflow/models/blob/r1.13.0/research/object_detection/object_detection_tutorial.ipynb) and only contains some slight changes. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1.md#installation) before you start. Imports ###Code import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf from distutils.version import StrictVersion # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') ###Output _____no_output_____ ###Markdown Env setup ###Code # This is needed to display the images. %matplotlib inline ###Output _____no_output_____ ###Markdown Object detection importsHere are the imports from the object detection module.
###Code from utils import label_map_util from utils import visualization_utils as vis_util ###Output C:\Users\Gilbert\Downloads\models\research\object_detection\utils\visualization_utils.py:26: UserWarning: matplotlib.pyplot as already been imported, this call will have no effect. import matplotlib; matplotlib.use('Agg') # pylint: disable=multiple-statements ###Markdown Model preparation Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ###Code # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') ###Output _____no_output_____ ###Markdown Download Model ###Code opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) ###Output _____no_output_____ ###Markdown Load a (frozen) Tensorflow model into memory. 
###Code detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ###Output _____no_output_____ ###Markdown Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ###Code category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) ###Output _____no_output_____ ###Markdown Detection ###Code def run_inference_for_single_image(image, graph): if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. 
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict import cv2 cap = cv2.VideoCapture(0) try: with detection_graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) while True: ret, image_np = cap.read() # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. 
output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) cv2.imshow('object_detection', cv2.resize(image_np, (800, 600))) if cv2.waitKey(25) & 0xFF == ord('q'): cap.release() cv2.destroyAllWindows() break except Exception as e: print(e) cap.release() ###Output _____no_output_____ ###Markdown Detection Objects in a webcam image streamThis notebook is based on the [official Tensorflow Object Detection demo](https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb) and only contains some slight changes. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. Imports ###Code import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from distutils.version import StrictVersion from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') ###Output _____no_output_____ ###Markdown Env setup ###Code # This is needed to display the images. %matplotlib inline ###Output _____no_output_____ ###Markdown Object detection importsHere are the imports from the object detection module. 
###Code from utils import label_map_util from utils import visualization_utils as vis_util ###Output C:\Users\Gilbert\Downloads\models\research\object_detection\utils\visualization_utils.py:26: UserWarning: matplotlib.pyplot as already been imported, this call will have no effect. import matplotlib; matplotlib.use('Agg') # pylint: disable=multiple-statements ###Markdown Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ###Code # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') ###Output _____no_output_____ ###Markdown Download Model ###Code opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) ###Output _____no_output_____ ###Markdown Load a (frozen) Tensorflow model into memory. 
###Code detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ###Output _____no_output_____ ###Markdown Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ###Code category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) ###Output _____no_output_____ ###Markdown Detection ###Code def run_inference_for_single_image(image, graph): if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. 
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict import cv2 cap = cv2.VideoCapture(0) try: with detection_graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) while True: ret, image_np = cap.read() # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. 
output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) cv2.imshow('object_detection', cv2.resize(image_np, (800, 600))) if cv2.waitKey(25) & 0xFF == ord('q'): cap.release() cv2.destroyAllWindows() break except Exception as e: print(e) cap.release() ###Output _____no_output_____
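The boxes in `output_dict['detection_boxes']` are normalized `[ymin, xmin, ymax, xmax]` coordinates (hence `use_normalized_coordinates=True` above). A small helper, independent of the notebook code and with an illustrative name, can convert them to pixel boxes and drop low-score detections before drawing:

```python
import numpy as np

def boxes_to_pixels(boxes, scores, height, width, min_score=0.5):
    """Scale normalized [ymin, xmin, ymax, xmax] boxes to pixel units,
    keeping only detections at or above `min_score`."""
    keep = scores >= min_score
    scale = np.array([height, width, height, width], dtype=np.float64)
    return boxes[keep] * scale

# Two fake detections for an 800x600 frame; only the first passes the score cut,
# giving the pixel box (60, 160, 300, 480).
boxes = np.array([[0.1, 0.2, 0.5, 0.6], [0.0, 0.0, 0.1, 0.1]])
scores = np.array([0.9, 0.2])
pixel_boxes = boxes_to_pixels(boxes, scores, height=600, width=800)
```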
docs/tutorials/calibrated_data_exploration.ipynb
###Markdown Explore Calibrated Data ###Code import ctapipe from ctapipe.utils.datasets import get_dataset_path from ctapipe.io import event_source, EventSeeker from ctapipe.visualization import CameraDisplay from ctapipe.instrument import CameraGeometry from matplotlib import pyplot as plt from astropy import units as u import numpy as np %matplotlib inline plt.style.use("ggplot") print(ctapipe.__version__) print(ctapipe.__file__) ###Output _____no_output_____ ###Markdown Let's first open a raw event file and get an event out of it: ###Code filename = get_dataset_path("gamma_test_large.simtel.gz") source = event_source(filename, max_events=2) for event in source: print(event.r0.event_id) filename source event print(event.r1) ###Output _____no_output_____ ###Markdown Perform basic calibration:Here we will use a `CameraCalibrator` which is just a simple wrapper that runs the three calibration and trace-integration phases of the pipeline, taking the data from levels: **R0** &rightarrow; **R1** &rightarrow; **DL0** &rightarrow; **DL1**You could of course do these each separately, by using the classes `R1Calibrator`, `DL0Reducer`, and `DL1Calibrator`.Note that we have not specified any configuration to the `CameraCalibrator`, so it will be using the default algorithms and thresholds, other than specifying that the product is a "HESSIOR1Calibrator" (hopefully in the near future that will be automatic).
###Code from ctapipe.calib import CameraCalibrator calib = CameraCalibrator(subarray=source.subarray) calib(event) ###Output _____no_output_____ ###Markdown Now the *r1*, *dl0* and *dl1* containers are filled in the event* **r1.tel[x]**: contains the "r1-calibrated" waveforms, after gain-selection, pedestal subtraction, and gain-correction* **dl0.tel[x]**: is the same but with optional data volume reduction (some pixels not filled), in this case this is not performed by default, so it is the same as r1* **dl1.tel[x]**: contains the (possibly re-calibrated) waveforms as dl0, but also the time-integrated *image* that has been calculated using an `ImageExtractor` (a `NeighborPeakWindowSum` by default) ###Code for tel_id in event.dl1.tel: print("TEL{:03}: {}".format(tel_id, event.inst.subarray.tel[tel_id])) print(" - r0 wave shape : {}".format(event.r0.tel[tel_id].waveform.shape)) print(" - r1 wave shape : {}".format(event.r1.tel[tel_id].waveform.shape)) print(" - dl1 image shape : {}".format(event.dl1.tel[tel_id].image.shape)) ###Output _____no_output_____ ###Markdown Some image processing:Let's look at the image ###Code from ctapipe.visualization import CameraDisplay tel_id = sorted(event.r1.tels_with_data)[1] sub = event.inst.subarray camera = sub.tel[tel_id].camera image = event.dl1.tel[tel_id].image disp = CameraDisplay(camera, image=image) from ctapipe.image import tailcuts_clean, hillas_parameters mask = tailcuts_clean(camera, image, picture_thresh=10, boundary_thresh=5, min_number_picture_neighbors=2) cleaned = image.copy() cleaned[~mask] = 0 disp = CameraDisplay(camera, image=cleaned) params = hillas_parameters(camera, cleaned) print(params) params params = hillas_parameters(camera, cleaned) plt.figure(figsize=(10,10)) disp = CameraDisplay(camera, image=image) disp.add_colorbar() disp.overlay_moments(params, color='red', lw=3) disp.highlight_pixels(mask, color='white', alpha=0.3, linewidth=2) plt.xlim(params.x.to_value(u.m) - 0.5, params.x.to_value(u.m) + 0.5)
plt.ylim(params.y.to_value(u.m) - 0.5, params.y.to_value(u.m) + 0.5) source.metadata ###Output _____no_output_____ ###Markdown More complex image processing:Let's now explore how stereo reconstruction works. first, look at a summed image from multiple telescopesFor this, we want to use a `CameraDisplay` again, but since we can't sum and display images with different cameras, we'll just sub-select images from a particular camera typeThese are the telescopes that are in this event: ###Code tels_in_event = set(event.dl1.tel.keys()) # use a set here, so we can intersect it later tels_in_event cam_ids = set(sub.get_tel_ids_for_type("MST_MST_FlashCam")) cams_in_event = tels_in_event.intersection(cam_ids) first_tel_id = list(cams_in_event)[0] tel = sub.tel[first_tel_id] print("{}s in event: {}".format(tel, cams_in_event)) ###Output _____no_output_____ ###Markdown Now, let's sum those images: ###Code image_sum = np.zeros_like(tel.camera.pix_x.value) # just make an array of 0's in the same shape as the camera for tel_id in cams_in_event: image_sum += event.dl1.tel[tel_id].image ###Output _____no_output_____ ###Markdown And finally display the sum of those images ###Code plt.figure(figsize=(8,8)) disp = CameraDisplay(tel.camera, image=image_sum) disp.overlay_moments(params, with_label=False) plt.title("Sum of {}x {}".format(len(cams_in_event), tel)) ###Output _____no_output_____ ###Markdown let's also show which telescopes those were. Note that currently ArrayDisplay's value field is a vector by `tel_index`, not `tel_id`, so we have to convert to a tel_index. 
(this may change in a future version to be more user-friendly) ###Code from ctapipe.visualization import ArrayDisplay nectarcam_subarray = sub.select_subarray("FlashCam", cam_ids) hit_pattern = np.zeros(shape=nectarcam_subarray.num_tels) hit_pattern[[nectarcam_subarray.tel_indices[x] for x in cams_in_event ]] = 100 plt.set_cmap(plt.cm.Accent) plt.figure(figsize=(8,8)) ad = ArrayDisplay(nectarcam_subarray) ad.values = hit_pattern ad.add_labels() ###Output _____no_output_____ 
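###Markdown The `tel_id` → `tel_index` bookkeeping used for the hit pattern above can be sketched with plain numpy; the telescope ids and values here are invented for illustration and this is not the ctapipe API:

```python
import numpy as np

# Hypothetical telescope ids of a small subarray (ids need not be contiguous)
tel_ids = [5, 11, 42, 103]
tel_indices = {tid: i for i, tid in enumerate(tel_ids)}  # id -> positional index

hit_pattern = np.zeros(len(tel_ids))
for tid in {11, 103}:          # telescopes that participated in the event
    hit_pattern[tel_indices[tid]] = 100

print(hit_pattern.tolist())    # [0.0, 100.0, 0.0, 100.0]
```

This mirrors why `ArrayDisplay.values` is filled per `tel_index` rather than per raw `tel_id`.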
(this may change in a future version to be more user-friendly) ###Code from ctapipe.visualization import ArrayDisplay nectarcam_subarray = sub.select_subarray("FlashCam", cam_ids) hit_pattern = np.zeros(shape=nectarcam_subarray.num_tels) hit_pattern[[nectarcam_subarray.tel_indices[x] for x in cams_in_event ]] = 100 plt.set_cmap(plt.cm.Accent) plt.figure(figsize=(8,8)) ad = ArrayDisplay(nectarcam_subarray) ad.values = hit_pattern ad.add_labels() ###Output _____no_output_____ ###Markdown Explore Calibrated Data ###Code import ctapipe from ctapipe.utils.datasets import get_dataset_path from ctapipe.io import event_source, EventSeeker from ctapipe.visualization import CameraDisplay from ctapipe.instrument import CameraGeometry from matplotlib import pyplot as plt from astropy import units as u import numpy as np %matplotlib inline plt.style.use("ggplot") print(ctapipe.__version__) print(ctapipe.__file__) ###Output _____no_output_____ ###Markdown Let's first open a raw event file and get an event out of it: ###Code filename = get_dataset_path("gamma_test_large.simtel.gz") source = event_source(filename, max_events=2) for event in source: print(event.r0.event_id) filename source event print(event.r1) ###Output _____no_output_____ ###Markdown Perform basic calibration:Here we will use a `CameraCalibrator` which is just a simple wrapper that runs the three calibration and trace-integration phases of the pipeline, taking the data from levels: **R0** &rightarrow; **R1** &rightarrow; **DL0** &rightarrow; **DL1**You could of course do these each separately, by using the classes `R1Calibrator`, `DL0Reducer`, and `DL1Calibrator`.Note that we have not specified any configuration to the `CameraCalibrator`, so it will be using the default algorithms and thresholds, other than specifying that the product is a "HESSIOR1Calibrator" (hopefully in the near future that will be automatic).
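A toy sketch may help picture the **R0** &rightarrow; **R1** step just described; this is *not* the ctapipe implementation, and the channel layout, pedestal, and gain values below are invented for illustration:

```python
import numpy as np

# Toy two-pixel camera with a high-gain and a low-gain channel per pixel.
high = np.array([4100.0, 380.0])   # high-gain samples (pixel 0 saturates)
low = np.array([710.0, 330.0])     # low-gain samples
pedestal = 300.0                   # assumed common pedestal level
gain_ratio = 10.0                  # assumed high/low gain ratio

# Gain selection: fall back to the low-gain channel where the high gain
# saturates, then pedestal-subtract and gain-correct onto a common scale.
saturated = high > 4000.0
r1 = np.where(saturated, (low - pedestal) * gain_ratio, high - pedestal)
print(r1)   # pixel 0 -> 4100.0 (from low gain), pixel 1 -> 80.0
```

The real calibrator does this per sample on full waveforms, with measured pedestals and gains per channel.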
###Code from ctapipe.calib import CameraCalibrator calib = CameraCalibrator() calib.calibrate(event) ###Output _____no_output_____ ###Markdown Now the *r1*, *dl0* and *dl1* containers are filled in the event* **r1.tel[x]**: contains the "r1-calibrated" waveforms, after gain-selection, pedestal subtraction, and gain-correction* **dl0.tel[x]**: is the same but with optional data volume reduction (some pixels not filled), in this case this is not performed by default, so it is the same as r1* **dl1.tel[x]**: contains the (possibly re-calibrated) waveforms as dl0, but also the time-integrated *image* that has been calculated using an `ImageExtractor` (a `NeighborPeakWindowSum` by default) ###Code for tel_id in event.dl1.tel: print("TEL{:03}: {}".format(tel_id, event.inst.subarray.tel[tel_id])) print(" - r0 wave shape : {}".format(event.r0.tel[tel_id].waveform.shape)) print(" - r1 wave shape : {}".format(event.r1.tel[tel_id].waveform.shape)) print(" - dl1 image shape : {}".format(event.dl1.tel[tel_id].image.shape)) ###Output _____no_output_____ ###Markdown Some image processing:Let's look at the image ###Code from ctapipe.visualization import CameraDisplay tel_id = sorted(event.r1.tels_with_data)[1] sub = event.inst.subarray camera = sub.tel[tel_id].camera image = event.dl1.tel[tel_id].image[0] disp = CameraDisplay(camera, image=image) from ctapipe.image import tailcuts_clean, hillas_parameters mask = tailcuts_clean(camera, image, picture_thresh=10, boundary_thresh=5, min_number_picture_neighbors=2) cleaned = image.copy() cleaned[~mask] = 0 disp = CameraDisplay(camera, image=cleaned) params = hillas_parameters(camera, cleaned) print(params) params params = hillas_parameters(camera, cleaned) plt.figure(figsize=(10,10)) disp = CameraDisplay(camera, image=image) disp.add_colorbar() disp.overlay_moments(params, color='red', lw=3) disp.highlight_pixels(mask, color='white', alpha=0.3, linewidth=2) plt.xlim(params.x.to_value(u.m) - 0.5, params.x.to_value(u.m) + 0.5)
plt.ylim(params.y.to_value(u.m) - 0.5, params.y.to_value(u.m) + 0.5) source.metadata ###Output _____no_output_____ ###Markdown More complex image processing:Let's now explore how stereo reconstruction works. first, look at a summed image from multiple telescopesFor this, we want to use a `CameraDisplay` again, but since we can't sum and display images with different cameras, we'll just sub-select images from a particular camera typeThese are the telescopes that are in this event: ###Code tels_in_event = set(event.dl1.tel.keys()) # use a set here, so we can intersect it later tels_in_event cam_ids = set(sub.get_tel_ids_for_type("MST:FlashCam")) cams_in_event = tels_in_event.intersection(cam_ids) first_tel_id = list(cams_in_event)[0] tel = sub.tel[first_tel_id] print("{}s in event: {}".format(tel, cams_in_event)) ###Output _____no_output_____ ###Markdown Now, let's sum those images: ###Code image_sum = np.zeros_like(tel.camera.pix_x.value) # just make an array of 0's in the same shape as the camera for tel_id in cams_in_event: image_sum += event.dl1.tel[tel_id].image[0] ###Output _____no_output_____ ###Markdown And finally display the sum of those images ###Code plt.figure(figsize=(8,8)) disp = CameraDisplay(tel.camera, image=image_sum) disp.overlay_moments(params, with_label=False) plt.title("Sum of {}x {}".format(len(cams_in_event), tel)) ###Output _____no_output_____ ###Markdown let's also show which telescopes those were. Note that currently ArrayDisplay's value field is a vector by `tel_index`, not `tel_id`, so we have to convert to a tel_index. 
(this may change in a future version to be more user-friendly) ###Code from ctapipe.visualization import ArrayDisplay nectarcam_subarray = sub.select_subarray("FlashCam", cam_ids) hit_pattern = np.zeros(shape=nectarcam_subarray.num_tels) hit_pattern[[nectarcam_subarray.tel_indices[x] for x in cams_in_event ]] = 100 plt.set_cmap(plt.cm.Accent) plt.figure(figsize=(8,8)) ad = ArrayDisplay(nectarcam_subarray) ad.values = hit_pattern ad.add_labels() ###Output _____no_output_____
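The `tel_id`-to-`tel_index` conversion used for the hit pattern above can be sketched in plain Python; the telescope ids below are made up for illustration, not taken from the event:

```python
# ArrayDisplay-style values are indexed by positional tel_index, while events
# report telescope ids, so an id -> index lookup is built first (this mirrors
# what subarray.tel_indices provides; the ids here are illustrative).
tel_ids = [5, 11, 42]                                    # ids in the toy subarray
tel_indices = {tid: i for i, tid in enumerate(tel_ids)}  # id -> positional index
values = [0.0] * len(tel_ids)
for tid in [11, 42]:                                     # telescopes hit in this event
    values[tel_indices[tid]] = 100.0
print(values)   # [0.0, 100.0, 100.0]
```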
myexamples/pylab/BYORP5.ipynb
###Markdown code for BYORP calculation The surface thermal inertia is neglected, so that thermal radiation is re-emitted with no time lag, and the reflected and thermally radiated components are assumed Lambertian (isotropic) and so emitted with flux parallel to the local surface normal. We ignore heat conduction. The surface is described with a closed triangular mesh.The radiation force from the $i$-th facet is$$ {\bf F}_i = - \frac{F_\odot}{c} {S_i} (\hat {\bf n}_i \cdot \hat {\bf s}_\odot) \hat {\bf n}_i $$where $S_i$ is the area of the $i$-th facet and $\hat {\bf n}_i$ is its surface normal.Here $F_\odot$ is the solar radiation flux and $c$ is the speed of light.The direction of the Sun is $\hat {\bf s}_\odot$.The total Yarkovsky force is a sum over all the facets $${\bf F}_Y = \sum_{i: \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0} {\bf F}_i $$Only facets on the day side or with $\hat {\bf n}_i \cdot \hat {\bf s}_\odot >0$ are included in the sum.The torque affecting the binary orbit from a single facet is $$ {\boldsymbol \tau}_{i,B} = \begin{cases} - \frac{F_\odot}{c} {S_i} (\hat {\bf n}_i \cdot \hat {\bf s}_\odot) ( {\bf a}_B \times \hat {\bf n}_i) & \mbox{if } \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0 \\ 0 & \mbox{otherwise} \end{cases}$$where ${\bf a}_B$ is the secondary's radial vector from the binary center of mass.The torque affecting the binary orbit is the sum of the torques from each facet and should be an average over the orbit around the Sun and over the binary orbit and spin of the secondary.$$ {\boldsymbol \tau}_{BY} = \frac{1}{T} \int_0^T dt\ \sum_{i: \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0} {\boldsymbol \tau}_{i,B} $$If $\hat {\bf l}$ is the binary orbit normal then $$ {\boldsymbol \tau}_{BY} \cdot \hat {\bf l} $$ changes the binary's orbital angular momentum and causes binary orbit migration.The torque affecting the spin (also known as YORP) instantaneously depends on the radii of each facet ${\bf r}_i$ from the asteroid center of mass $$
{\boldsymbol \tau}_{i,s} = \begin{cases}- \frac{F_\odot}{c} {S_i} (\hat {\bf n}_i \cdot \hat {\bf s}_\odot) ({\bf r}_i \times \hat{\bf n}_i) & \mbox{if } \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0 \\0 & \mbox{otherwise}\end{cases}$$$$ {\boldsymbol \tau}_Y = \frac{1}{T} \int_0^T dt \ \sum_{i: \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0} {\boldsymbol \tau}_{i,s} $$where the average is done over the orbit about the Sun and the spin of the asteroid.If the spin axis is $\hat {\boldsymbol \omega}$ then $$ {\boldsymbol \tau}_Y \cdot \hat {\boldsymbol \omega} $$ gives the body spin up or spin down rate.In practice we average over the Sun's directions first and then average over spin (for YORP) or spin and binary orbit direction (for BYORP) afterward. Units For our calculation are $F_\odot/c = 1$.For YORP $R=1$.For BYORP $a_B = 1$ and $R=1$ (in the surface area).Here $R$ is volume equivalent sphere radius.To put in physical units: Multiply ${\boldsymbol \tau}_Y$ by $\frac{F_\odot R^3}{c}$.Multiply ${\boldsymbol \tau}_{BY}$ by $\frac{F_\odot R^2 a_B}{c}$.Alternatively we are computing:${\boldsymbol \tau}_Y \times \frac{c}{F_\odot R^3} $ ${\boldsymbol \tau}_{BY} \times \frac{c}{F_\odot R^2 a_B} $ To get the rate the spin changes for YORP$\dot \omega = \frac{ {\boldsymbol \tau}_Y \cdot \hat {\bf s} }{C} $where $C$ is the moment of inertia about the spin axis.To order of magnitude what we are computing can be multiplied by $\frac{F_\odot R^3}{c MR^2} $ to estimate $\dot \omega$and by $\frac{F_\odot R^3}{c MR^2 \omega} $to estimate $\dot \epsilon$.
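For a rough sense of scale, these order-of-magnitude multipliers can be evaluated numerically. The solar constant, radius, and density below are illustrative Didymos-like assumptions, not values used elsewhere in this notebook:

```python
import numpy as np

# Illustrative physical scales (all values assumed) for the secondary of a
# Didymos-like binary; converts the dimensionless torque to physical units.
F_sun = 1361.0     # W m^-2, solar constant at 1 au
c = 2.998e8        # m s^-1, speed of light
R = 80.0           # m, volume-equivalent radius
rho = 2000.0       # kg m^-3, bulk density
M = rho * (4.0 / 3.0) * np.pi * R**3     # mass of the secondary

tau_scale = F_sun * R**3 / c             # N m per unit of dimensionless YORP torque
spin_scale = tau_scale / (M * R**2)      # rad s^-2 per unit torque, the omega-dot factor
print(tau_scale, spin_scale)             # ~2.3 N m and ~8e-14 rad s^-2
```

Multiplying a computed dimensionless torque component by these factors gives the physical torque and an order-of-magnitude spin-up rate.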
To get the rate that obliquity changes for YORP$\dot \epsilon = \frac{ {\boldsymbol \tau}_Y \cdot \hat {\boldsymbol \phi} }{C \omega} $where unit vector $\hat {\boldsymbol \phi}$ is in the xy plane (ecliptic) and is perpendicular to the spin axis.To get the semi-major axis drift rate for BYORP$ \dot a_B = \frac{2 {\boldsymbol \tau}_{BY} \cdot \hat {\bf l}}{M n_B a_B} $where $M$ is the secondary mass, $n_B$ and $a_B$ are binary orbit mean motion and semi-major axis.To order of magnitude to get the drift rate we multiply what we are getting by $\frac{F_\odot R^2 a_B}{c} \times \frac{1}{M n_B a_B}$.Dimensionless numbers used by Steinberg+10 (eqns 19,48)$f_{Y} \equiv \tau_{Y} \frac{3}{2} \frac{c}{\pi R^3 F_\odot}$$f_{BY} \equiv \tau_{BY} \frac{3}{2} \frac{c}{\pi R^2 a_B F_\odot}$Our computed values are the same as theirs except for a factor of 3/2 (but they have a 2/3 in their torque) and a factor of $\pi$.We need to divide by $\pi$ to have values consistent with theirs. Assumptions:Circular orbit for binary.Circular orbit for binary around Sun.No shadows.No conduction. Lambertian isotropic emission. No thermal lag.We neglect distance of facet centroids from secondary center of mass when computing BYORP. Coordinate system:binary orbit is kept in xy plane Compare YORP on primary to BYORP on secondary.$\frac{\tau_{Yp}}{\tau_{BY} }\sim \frac{R_p^2 }{R_s^2 } \frac{R_p }{a_B }\frac{f_Y}{ f_{BY}}$For Didymos, this is about $8 f_Y/f_{BY}$. ###Code import random import numpy as np import pymesh import meshplot from matplotlib import pyplot as plt random.seed(1) print(random.uniform(-1,1)) # perturb a sphere (mesh, premade) and stretch it so that # it becomes an ellipsoid.
# We can't directly edit vertices or faces # see this: https://github.com/PyMesh/PyMesh/issues/156 # the workaround is to copy the entire mesh after modifying it # arguments: # devrand, Randomly add devrand to x,y,z positions of each vertex # a uniform distribution in [-1,1] is used # aratio1 and aratio2, stretch or compress a sphere by aratio1 and aratio2 # returns: a new mesh # we assume that longest semi-major axis a is along x, # medium semi-axis b is along y, semi-minor c axis is along z # Volume should stay the same! def sphere_perturb(sphere,devrand,aratio1,aratio2): #devrand = 0.05 # how far to perturb each vertex nv = len(sphere.vertices) f = sphere.faces v = np.copy(sphere.vertices) # add perturbations to x,y,z to all vertices for i in range(nv): dx = devrand*random.uniform(-1,1) dy = devrand*random.uniform(-1,1) dz = devrand*random.uniform(-1,1) v[i,0] += dx v[i,1] += dy v[i,2] += dz # aratio1 = c/a this gives c = aratio1*a # aratio2 = b/a this gives b = aratio2*a # volume = 4/3 pi a*b*c for an ellipsoid # vol = 1*aratio1*aratio2 # rad_cor = pow(vol,-1./3.) # v[:,2] *= aratio1*rad_cor # make oblate, adjusts z coords # v[:,1] *= aratio2*rad_cor # make elongated in xy plane , adjusts y coords # v[:,0] *= rad_cor # adjusts x coords # volume should now stay the same sub_com(v) # subtract center of mass from vertex positions psphere = pymesh.form_mesh(v, f) psphere.add_attribute("face_area") psphere.add_attribute("face_normal") psphere.add_attribute("face_centroid") sbody = body_stretch(psphere,aratio1,aratio2) # do the stretching return sbody # stretch a mesh body by axis ratios # arguments: # body: mesh # aratio1: c/a # aratio2: b/a # returns: a new mesh # we assume that longest semi-major axis a is along x, # medium semi-axis b is along y, semi-minor c axis is along z # Volume should stay the same!
def body_stretch(body,aratio1,aratio2): nv = len(body.vertices) f = body.faces v = np.copy(body.vertices) # aratio1 = c/a this gives c = aratio1*a # aratio2 = b/a this gives b = aratio2*a # volume = 4/3 pi a*b*c for an ellipsoid vol = 1*aratio1*aratio2 rad_cor = pow(vol,-1./3.) v[:,2] *= aratio1*rad_cor # make oblate, adjusts z coords v[:,1] *= aratio2*rad_cor # make elongated in xy plane , adjusts y coords v[:,0] *= rad_cor # adjusts x coords # volume should now stay the same sub_com(v) # subtract center of mass from vertex positions sbody = pymesh.form_mesh(v, f) sbody.add_attribute("face_area") sbody.add_attribute("face_normal") sbody.add_attribute("face_centroid") return sbody # substract the center of mass from a list of vertices def sub_com(v): nv = len(v) xsum = np.sum(v[:,0]) ysum = np.sum(v[:,1]) zsum = np.sum(v[:,2]) xmean = xsum/nv ymean = ysum/nv zmean = zsum/nv v[:,0]-= xmean v[:,1]-= ymean v[:,2]-= zmean # compute surface area by summing area of all facets # divide by 4pi which is the surface area of a sphere with radius 1 def surface_area(mesh): #f = mesh.faces S_i = mesh.get_face_attribute('face_area') area =np.sum(S_i) return area/(4*np.pi) # print number of faces def nf_mesh(mesh): f = mesh.faces print('number of faces ',len(f)) # meshplot with a bounding box def plt_mesh(vertices,faces,xmax): m = np.array([-xmax,-xmax,-xmax]) ma = np.abs(m) # Corners of the bounding box v_box = np.array([[m[0], m[1], m[2]], [ma[0], m[1], m[2]], [ma[0], ma[1], m[2]], [m[0], ma[1], m[2]], [m[0], m[1], ma[2]], [ma[0], m[1], ma[2]], [ma[0], ma[1], ma[2]], [m[0], ma[1], ma[2]]]) # Edges of the bounding box f_box = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [4, 5], [5, 6], [6, 7], [7, 4], [0, 4], [1, 5], [2, 6], [7, 3]], dtype=np.int) p = meshplot.plot(vertices, faces, return_plot=True) # plot body p.add_edges(v_box, f_box, shading={"line_color": "red"}); #p.add_points(v_box, shading={"point_color": "green"}) return p # meshplot with a bounding square def 
plt_mesh_square(vertices,faces,xmax): m = np.array([-xmax,-xmax,-xmax]) ma = np.abs(m) # Corners of the bounding box v_box = np.array([[-xmax, -xmax, 0], [-xmax, xmax, 0], [xmax, xmax,0] , [xmax, -xmax, 0]]) # Edges of the bounding box f_box = np.array([[0, 1], [1, 2], [2, 3], [3, 0]], dtype=np.int) p = meshplot.plot(vertices, faces, return_plot=True) # plot body p.add_edges(v_box, f_box, shading={"line_color": "red"}); #p.add_points(v_box, shading={"point_color": "green"}) return p # perform a rotation on a vertex list and return a new set of rotated vertices # rotate about axis and via angle in radians def rotate_vertices(vertices,axis,angle): qs = pymesh.Quaternion.fromAxisAngle(axis, angle) v = np.copy(vertices) nv = len(v) # loop over all vertices and do two rotations for i in range(nv): v[i] = qs.rotate(v[i]) # perform rotation return v # compute the volume of the tetrahedron formed from face with index iface # and the origin def vol_i(mesh,iface): f = mesh.faces v = mesh.vertices iv1 = f[iface,0] # indexes of the 3 vertices iv2 = f[iface,1] iv3 = f[iface,2] #print(iv1,iv2,iv3) v1 = v[iv1] # the 3 vertices v2 = v[iv2] v3 = v[iv3] #print(v1,v2,v3) mat = np.array([v1,v2,v3]) # the volume is equal to 1/6 determinant of the matrix formed with the three vertices # https://en.wikipedia.org/wiki/Tetrahedron #print(mat) vol = np.linalg.det(mat)/6.0 # compute determinant return vol # compute the volume of the mesh by looping over all tetrahedrons formed from the faces # we assume that the body is convex def volume_mesh(mesh): f = mesh.faces nf = len(f) vol = 0.0 for iface in range(nf): vol += vol_i(mesh,iface) return vol # if vol equ radius is 1 the volume should be equal to 4*np.pi/3 which is 4.1888 # tests #vi = vol_i(squannit,1) #print(vi) #vtot = volume_mesh(squannit) #print(vtot) # correct all the radii so that the volume becomes that of a sphere with radius 1 # return a new mesh def cor_volume(mesh): vol = volume_mesh(mesh) print('Volume {:.4f}'.format(vol)) rad 
= pow(vol*3/(4*np.pi),1.0/3.0) print('radius of vol equ sphere {:.4f}'.format(rad)) f = mesh.faces v = np.copy(mesh.vertices) v /= rad newmesh = pymesh.form_mesh(v, f) newmesh.add_attribute("face_area") newmesh.add_attribute("face_normal") newmesh.add_attribute("face_centroid") vol = volume_mesh(newmesh) print('new Volume {:.3f}'.format(vol)) return newmesh # compute the radiation force instantaneously on a triangular mesh for each facit # arguments: # mesh, the body (a triangular surface mesh) # s_hat is a 3 length np.array (a unit vector) pointing to the Sun # return the vector F_i for each facet # returns: F_i_x is the x component of F_i and is a vector that has the length of the number of faces # Force is zero if facets are not on the day side def F_i(mesh,s_hat): s_len = np.sqrt(s_hat[0]**2 + s_hat[1]**2 + s_hat[2]**2) # in case s_hat was not normalized #nf = len(mesh.faces) S_i = mesh.get_face_attribute('face_area') # vector of facet areas f_normal = mesh.get_face_attribute('face_normal') # vector of vector of facet normals # normal components nx = np.squeeze(f_normal[:,0]) # a vector, of length number of facets ny = np.squeeze(f_normal[:,1]) nz = np.squeeze(f_normal[:,2]) # dot product of n_i and s_hat n_dot_s = (nx*s_hat[0] + ny*s_hat[1] + nz*s_hat[2])/s_len # a vector F_i_x = -S_i*n_dot_s*nx # a vector, length number of facets F_i_y = -S_i*n_dot_s*ny F_i_z = -S_i*n_dot_s*nz ii = (n_dot_s <0) # the night sides F_i_x[ii] = 0 # get rid of night sides F_i_y[ii] = 0 F_i_z[ii] = 0 return F_i_x,F_i_y,F_i_z # these are each vectors for each face # compute radiation forces F_i for each face, but averaging over all positions of the Sun # a circular orbit for the asteroid is assumed # arguments: # nphi_Sun is the number of solar angles, evenly spaced in 2pi so we are assuming circular orbit # incl is solar orbit inclination in radians # returns: F_i_x average and other 2 components of forces for each facet def F_i_sun_ave(mesh,nphi_Sun,incl): dphi = 2*np.pi/nphi_Sun 
# compute the first set of forces so we have vectors the right length phi = 0.0 s_hat = np.array([np.cos(phi)*np.cos(incl),np.sin(phi)*np.cos(incl),np.sin(incl)]) # compute the radiation force instantaneously on the triangular mesh for sun at s_hat F_i_x_sum,F_i_y_sum,F_i_z_sum = F_i(mesh,s_hat) # now compute the forces for the rest of the solar angles for i in range(1,nphi_Sun): # do the rest of the angles phi = i*dphi s_hat = np.array([np.cos(phi)*np.cos(incl),np.sin(phi)*np.cos(incl),np.sin(incl)]) # compute the radiation force instantaneously on the triangular mesh for sun at s_hat F_i_x,F_i_y,F_i_z = F_i(mesh,s_hat) # These are vectors of length number of facets F_i_x_sum += F_i_x # sum up forces F_i_y_sum += F_i_y F_i_z_sum += F_i_z F_i_x_ave = F_i_x_sum/nphi_Sun # average F_i_y_ave = F_i_y_sum/nphi_Sun F_i_z_ave = F_i_z_sum/nphi_Sun return F_i_x_ave,F_i_y_ave,F_i_z_ave # these are vectors for each face # compute cross product C=AxB using components def cross_prod_xyz(Ax,Ay,Az,Bx,By,Bz): Cx = Ay*Bz - Az*By Cy = Az*Bx - Ax*Bz Cz = Ax*By - Ay*Bx return Cx,Cy,Cz # compute total Yorp torque averaging over nphi_Sun solar positions # this is at a single body orientation # a circular orbit is assumed # arguments: # mesh: the body # nphi_Sun is the number of solar angles # incl is solar orbit inclination in radians # returns: torque components def tau_Ys(mesh,nphi_Sun,incl): # compute F_i for each face, but averaging over all positions of the Sun F_i_x_ave, F_i_y_ave,F_i_z_ave = F_i_sun_ave(mesh,nphi_Sun,incl) r_i = mesh.get_face_attribute("face_centroid") # radii to each facet rx = np.squeeze(r_i[:,0]) # radius of centroid from center of mass ry = np.squeeze(r_i[:,1]) # these are vectors, length number of faces rz = np.squeeze(r_i[:,2]) # cross product works on vectors tau_i_x,tau_i_y,tau_i_z = cross_prod_xyz(rx,ry,rz,F_i_x_ave,F_i_y_ave,F_i_z_ave) #This is the torque from each day lit facet tau_x = np.sum(tau_i_x) # sum up forces from all faces tau_y = 
np.sum(tau_i_y) tau_z = np.sum(tau_i_z) return tau_x,tau_y,tau_z # these are numbers for torque components # compute total BYORP averaging over nphi_Sun solar positions # for a single binary vector a_bin and body position described with mesh # arguments: # incl is solar orbit inclination in radians # nphi_Sun is the number of solar angles # returns: torque components def tau_Bs(mesh,nphi_Sun,incl,a_bin): # compute F_i for each face, but averaging over all positions of the Sun F_i_x_ave, F_i_y_ave,F_i_z_ave = F_i_sun_ave(mesh,nphi_Sun,incl) # these are vectors length number of faces # forces from day lit faces F_x = np.sum(F_i_x_ave) #sum up the force F_y = np.sum(F_i_y_ave) F_z = np.sum(F_i_z_ave) a_x = a_bin[0] # binary direction a_y = a_bin[1] a_z = a_bin[2] tau_x,tau_y,tau_z = cross_prod_xyz(a_x,a_y,a_z,F_x,F_y,F_z) # cross product return tau_x,tau_y,tau_z # these are numbers that give the torque components # first rotate vertices in the mesh about the z axis by angle phi in radians # then tilt over the body by obliquity which is an angle in radians # arguments: # mesh, triangular surface mesh for body # obliquity, angle in radians to tilt body z axis over # phi, angle in radians to spin/rotate body about its z axis # phi_prec, angle in radians that tilt is done, it's a precession angle # sets rotation axis for tilt, this axis is in the xy plane # returns: # new_mesh: the tilted/rotated mesh # zrot: the new z-body spin axis def tilt_obliq(mesh,obliquity,phi,phi_prec): f = mesh.faces v = np.copy(mesh.vertices) nv = len(v) # precession angle is phi_prec axist = np.array([np.cos(phi_prec),np.sin(phi_prec),0]) qt = pymesh.Quaternion.fromAxisAngle(axist, obliquity) zaxis = np.array([0,0,1]) zrot = qt.rotate(zaxis) # body principal axis will become zrot # spin rotation about now tilted principal body axis qs = pymesh.Quaternion.fromAxisAngle(zrot, phi) # loop over all vertices and do two rotations for i in range(nv): v[i] = qt.rotate(v[i]) # tilt it over v[i] =
qs.rotate(v[i]) # spin new_mesh = pymesh.form_mesh(v, f) new_mesh.add_attribute("face_area") new_mesh.add_attribute("face_normal") new_mesh.add_attribute("face_centroid") return new_mesh,zrot # tilt,spin a body and compute binary direction, assuming tidally locked # arguments: # body: triangular surface mesh (in principal axis coordinate system) # nphi is the number of angles that could be done with indexing by iphi # obliquity: w.r.t to binary orbit angular momentum direction # iphi: number of rotations by dphi where dphi = 2pi/nphi # this is principal axis rotation about z axis # phi0: an offset for phi applied to body but not binary axis # phi_prec: a precession angle for tilting # returns: # tbody, a body rotated after iphi rotations by dphi and tilted by obliquity # a_bin, binary direction assuming same rotation rate, tidal lock # l_bin: binary orbit angular momentum orbital axis # zrot: spin axis direction def tilt_and_bin(body,obliquity,nphi,iphi,phi0,phi_prec): dphi = 2*np.pi/nphi phi = iphi*dphi tbody,zrot = tilt_obliq(body,obliquity,phi + phi0,phi_prec) # tilt and spin body a_bin = np.array([np.cos(phi),np.sin(phi),0.0]) # direction to binary l_bin = np.array([0,0,1.0]) # angular momentum axis of binary orbit return tbody,a_bin,l_bin,zrot # compute the YORP torque on body # arguments: # body: triangular surface mesh (in principal axis coordinate system) # nphi is number of body angles spin # nphi_Sun is the number of solar angles used # obliquity: angle of body w.r.t to Sun aka ecliptic pole # returns: # 3 torque components # torque dot spin axis so spin down rate can be computed # torque dot azimuthal unit vector so obliquity change rate can be computed def compute_Y(body,obliquity,nphi,nphi_Sun): incl = 0.0 # set Sun inclination to zero so obliquity is w.r.t solar orbit phi0 = 0 # offset in spin set to zero phi_prec=0 # precession angle set to zero tau_Y_x = 0.0 tau_Y_y = 0.0 tau_Y_z = 0.0 for iphi in range(nphi): # body positions # rotate the body and 
tilt it over tbody,a_bin,l_bin,zrot = tilt_and_bin(body,obliquity,nphi,iphi,phi0,phi_prec) # compute torques over solar positions tau_x,tau_y,tau_z = tau_Ys(tbody,nphi_Sun,incl) tau_Y_x += tau_x tau_Y_y += tau_y tau_Y_z += tau_z tau_Y_x /= nphi # average tau_Y_y /= nphi tau_Y_z /= nphi # compute component that affects spin-down/up rate, this is tau dot spin axis sx = zrot[0]; sy = zrot[1]; sz=zrot[2] tau_s = tau_Y_x*sx + tau_Y_y*sy + tau_Y_z*sz # we need a unit vector, phi_hat, that is in the xy plane, points in the azimuthal direction # and is perpendicular to the rotation axis spl = np.sqrt(sx**2 + sy**2) tau_o = 0 if (spl >0): phi_hat_x = sy/spl phi_hat_y = -sx/spl phi_hat_z = 0 tau_o = tau_Y_x*phi_hat_x + tau_Y_y*phi_hat_y+tau_Y_z*phi_hat_z # tau_o should tell us about obliquity change rate return tau_Y_x,tau_Y_y,tau_Y_z,tau_s,tau_o # compute the BYORP torque, for a tidally locked binary # arguments: # body: triangular surface mesh (in principal axis coordinate system) # nphi is the number of body angles we will use (spin) # obliquity is body tilt w.r.t to binary orbit # incl is solar orbit inclination # nphi_Sun is the number of solar angles used # phi0 an offset for body spin angle that is not applied to binary direction # phi_prec z-axis precession angle, relevant for obliquity # returns: # 3 torque components # torque dot l_bin so can compute binary orbit drift rate def compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec): tau_BY_x = 0.0 tau_BY_y = 0.0 tau_BY_z = 0.0 for iphi in range(nphi): # body positions # rotate the body and tilt it over, and find binary direction tbody,a_bin,l_bin,zrot = tilt_and_bin(body,obliquity,nphi,iphi,phi0,phi_prec) # a_bin is binary direction # compute torques over spin/body positions tau_x,tau_y,tau_z =tau_Bs(tbody,nphi_Sun,incl,a_bin) tau_BY_x += tau_x tau_BY_y += tau_y tau_BY_z += tau_z tau_BY_x /= nphi # average tau_BY_y /= nphi tau_BY_z /= nphi # compute component that affects binary orbit angular momentum # this 
is tau dot l_bin tau_l = tau_BY_x*l_bin[0] + tau_BY_y*l_bin[1] + tau_BY_z*l_bin[2] return tau_BY_x,tau_BY_y,tau_BY_z, tau_l # compute the YORP torque on body as a function of obliquity # here obliquity is w.r.t Sun # returns obliquity and torque arrays def obliq_Y_fig(body): nphi_Sun=36 # number of solar positions nphi = 36 # number of spin positions nobliq = 20 # number of obliquities dobliq = np.pi/nobliq tau_s_arr = np.zeros(nobliq) # to store torques tau_o_arr = np.zeros(nobliq) # to store torques o_arr = np.zeros(nobliq) # to store obliquities in degrees for i in range(nobliq): obliquity=i*dobliq tau_Y_x,tau_Y_y,tau_Y_z,tau_s,tau_o =compute_Y(body,obliquity,nphi,nphi_Sun) #print(tau_s) tau_s_arr[i] = tau_s tau_o_arr[i] = tau_o o_arr[i] = obliquity*180/np.pi return o_arr, tau_s_arr, tau_o_arr print(4*np.pi/3) # compute the BYORP torque on body as a function of inclination # for a given obliquity and precession angle # returns inclination and torque arrays def obliq_BY_fig(body,obliquity,phi_prec): phi0=0 nphi_Sun=36 # number of solar positions nphi = 36 # number of spin positions nincl = 20 # number of inclinations dincl = np.pi/nincl tau_l_arr = np.zeros(nincl) # to store torques i_arr = np.zeros(nincl) for i in range(nincl): incl=i*dincl tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec) i_arr[i] = incl*180/np.pi tau_l_arr[i] = tau_l return i_arr,tau_l_arr # compute the BYORP torque on body as a function of obliquity # for a given inclination and precession angle # returns obliquity and torque arrays def obliq_BY_fig2(body,incl,phi_prec): phi0=0 nphi_Sun=36 # number of solar positions nphi = 36 # number of spin positions nobliq = 60 # number of obliquities dobliq = np.pi/nobliq tau_l_arr = np.zeros(nobliq) # to store torques o_arr = np.zeros(nobliq) for i in range(nobliq): obliquity=i*dobliq tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec) o_arr[i] = obliquity*180/np.pi 
tau_l_arr[i] = tau_l return o_arr,tau_l_arr # compute the BYORP torque on body as a function of precession angle # for a given obliquity and inclination # returns precession angle and torque arrays def obliq_BY_fig3(body,obliquity,incl): phi0=0 nphi_Sun=36 # number of solar positions nphi = 36 # number of spin positions nprec = 30 # number of precession angles dprec = np.pi/nprec # only goes from 0 to pi tau_l_arr = np.zeros(nprec) # to store torques p_arr = np.zeros(nprec) for i in range(nprec): phi_prec=i*dprec tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec) p_arr[i] = phi_prec*180/np.pi tau_l_arr[i] = tau_l return p_arr,tau_l_arr # compute the BYORP torque on body as a function of libration angle phi0 # for a given obliquity and inclination and precession angle # returns libration angle and torque arrays def obliq_BY_fig4(body,obliquity,incl,phi_prec): phi0=0 nphi_Sun=36 # number of solar positions nphi = 36 # number of spin positions nlib = 20 # number of libration angles dlib = 0.5*np.pi/nlib # going from -pi/4 to pi/4 tau_l_arr = np.zeros(nlib) # to store torques l_arr = np.zeros(nlib) for i in range(nlib): phi0=i*dlib - np.pi/4 tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec) l_arr[i] = phi0*180/np.pi tau_l_arr[i] = tau_l return l_arr,tau_l_arr # compute the BYORP torque on body as a function of obliquity and precession angle # for a given inclination # returns 2D torque array and arrays for the axes so a contour or color image can be plotted def obliq_BY_fig2D(body,incl): phi0=0 nphi_Sun=36 # number of solar positions nphi = 36 # number of spin positions nprec = 10 # number of precession angles nobliq = 12 # number of obliquities dprec = np.pi/nprec dobliq = np.pi/nobliq tau_l_arr = np.zeros((nprec,nobliq)) # to store torques # with imshow x axis will be obliq p_arr = np.zeros(nprec) o_arr = np.zeros(nobliq) for i in range(nprec): phi_prec=i*dprec p_arr[i] = 
phi_prec*180/np.pi for j in range(nobliq): obliquity = j*dobliq tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec) o_arr[j] = obliquity*180/np.pi tau_l_arr[i,j] = tau_l print(i) return p_arr,o_arr,tau_l_arr # create a sphere of radius 1 center = np.array([0,0,0]) sphere = pymesh.generate_icosphere(1., center, refinement_order=2) sphere.add_attribute("face_area") sphere.add_attribute("face_normal") sphere.add_attribute("face_centroid") print(volume_mesh(sphere)) nf_mesh(sphere) # create a perturbed ellipsoid using the above sphere devrand = 0.025 # perturbation size # fiducial model aratio1 = 0.5 # axis ratios c/a aratio2 = 0.7 # b/a random.seed(1) # fix sequence psphere1 = sphere_perturb(sphere,devrand,1,1) #perturbed sphere body1 = body_stretch(psphere1,aratio1,aratio2) # stretch #print(volume_mesh(body1)) #check volume devrand = 0.05 # perturbation size larger random.seed(1) # same perturbations psphere2 = sphere_perturb(sphere,devrand,1,1) #more strongly perturbed sphere aratio1 = 0.5 # same axis ratios aratio2 = 0.7 body2 = body_stretch(psphere2,aratio1,aratio2) devrand = 0.025 # perturbation size same as fiducial random.seed(20) # different perturbations psphere3 = sphere_perturb(sphere,devrand,1,1) #differently perturbed sphere aratio1 = 0.5 # same axis ratios aratio2 = 0.7 body3 = body_stretch(psphere3,aratio1,aratio2) xmax = 1.5 p = plt_mesh_square(body1.vertices,body1.faces,xmax) # p.save('junk.html') # works but no way to snap or get orientation xrot = np.array([1,0,0]) vrot = rotate_vertices(body1.vertices,xrot,np.pi/2) p = plt_mesh_square(vrot,body1.faces,xmax) xmax = 1.5 p = plt_mesh_square(body2.vertices,body2.faces,xmax) # p.save('junk.html') # works but no way to snap or get orientation xrot = np.array([1,0,0]) vrot = rotate_vertices(body2.vertices,xrot,np.pi/2) p = plt_mesh_square(vrot,body2.faces,xmax) aratio1 = 0.5 # new axis ratios aratio2 = 0.8 body1a = body_stretch(psphere1,aratio1,aratio2) aratio1 = 0.6 # new axis
ratios aratio2 = 0.7 body1b = body_stretch(psphere1,aratio1,aratio2) # check total surface area print(surface_area(sphere)) print(surface_area(psphere1)) print(surface_area(psphere2)) print(surface_area(psphere3)) print(surface_area(body1)) print(surface_area(body2)) print(surface_area(body3)) print(surface_area(body1a)) print(surface_area(body1b)) # subtract 1 and you have approximately the d_s used by Steinberg+10 # many of their d_s are lower (see their figure 3) # compute BYORPs as a function of obliquity incl = 0; phi_prec=0 o_arr1,tau_l_arr1 = obliq_BY_fig2(body1,incl,phi_prec) o_arr_s,tau_l_arr_s = obliq_BY_fig2(sphere,incl,phi_prec) o_arr2,tau_l_arr2 = obliq_BY_fig2(body2,incl,phi_prec) o_arr3,tau_l_arr3 = obliq_BY_fig2(body3,incl,phi_prec) fig,ax = plt.subplots(1,1,figsize=(6,3),dpi=300) plt.subplots_adjust(bottom=0.19,top=0.98) ax.plot(o_arr_s,tau_l_arr_s,'go:',alpha=0.5,label='icosphere') ax.plot(o_arr1,tau_l_arr1,'rs:',alpha=0.5,label=r'$\Delta=0.025, c/a=0.5, b/a=0.7$') ax.plot(o_arr3,-tau_l_arr3,'mP:',alpha=0.5,label=r'$\Delta=0.025, c/a=0.5, b/a=0.7$') ax.plot(o_arr2,tau_l_arr2,'bd:',alpha=0.5,label=r'$\Delta=0.050, c/a=0.5, b/a=0.7$') #ax.plot(o_arr3,tau_l_arr3,'cv:',alpha=0.5,label=r'$\Delta=0.025, c/a=0.5, b/a=0.8$') #ax.plot(o_arr4,tau_l_arr4,'mP:',alpha=0.5,label=r'$\Delta=0.025, c/a=0.6, b/a=0.7$') ax.set_xlabel('obliquity (deg)',fontsize=16) ax.set_ylabel(r'${ \tau}_{BY} \cdot \hat{l}$',fontsize=16) ax.legend(borderpad=0.1,labelspacing=0.1,handletextpad=0.1) plt.savefig('tau_BY_obl.png') print(o_arr3[0:5]) print(-tau_l_arr3[0:5]) o_arr1a,tau_l_arr1a = obliq_BY_fig2(body1a,incl,phi_prec) o_arr1b,tau_l_arr1b = obliq_BY_fig2(body1b,incl,phi_prec) fig,ax = plt.subplots(1,1,figsize=(6,3),dpi=300) plt.subplots_adjust(bottom=0.19,top=0.98) #ax.plot(o_arr_s,tau_l_arr_s,'go:',alpha=0.5,label='sphere') ax.plot(o_arr1 ,tau_l_arr1, 'rs:',alpha=0.5,label=r'$\Delta=0.025, c/a=0.5, b/a=0.7$') ax.plot(o_arr1a,tau_l_arr1a,'bd:',alpha=0.5,label=r'$\Delta=0.025, 
c/a=0.5, b/a=0.8$') ax.plot(o_arr1b,tau_l_arr1b,'cv:',alpha=0.5,label=r'$\Delta=0.025, c/a=0.6, b/a=0.7$') ax.set_xlabel('obliquity (deg)',fontsize=16) ax.set_ylabel(r'${ \tau}_{BY} \cdot \hat{l}$',fontsize=16) ax.legend(borderpad=0.1,labelspacing=0.1,handletextpad=0.1) plt.savefig('tau_BY_obl2.png') # compute YORPs as a function of obliquity (single body, obliquity w.r.t Solar orbit) o_arr, tau_s_arr, tau_o_arr = obliq_Y_fig(body1) # also check the sphere for YORP o_arr2, tau_s_arr2,tau_o_arr2 = obliq_Y_fig(sphere) # note y axis # compare the two YORPs fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150) ax.plot(o_arr2,tau_s_arr2,'go-',label='sphere') #ax.plot(o_arr2,tau_o_arr2,'bo-',label='sphere') ax.plot(o_arr,tau_s_arr,'rD-',label=r'body, $s$') ax.plot(o_arr,tau_o_arr,'D:',label='body, $o$', color='orange') ax.set_xlabel('obliquity (deg)',fontsize=16) ax.set_ylabel(r'${ \tau}_Y \cdot \hat{ s}, { \tau}_Y \cdot \hat{\phi}$',fontsize=16) ax.legend() # the sizes here agree with right hand side of Figure 3 by Steinberg&Sari+11 # compute BYORPs as a function of inclination obliquity = 0; phi_prec=0 i_arr,tau_l_arr = obliq_BY_fig(body1,obliquity,phi_prec) i_arr2,tau_l_arr2 = obliq_BY_fig(sphere,obliquity,phi_prec) fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150) ax.plot(i_arr2,tau_l_arr2,'go-',label='sphere') ax.plot(i_arr,tau_l_arr,'rD-',label='body') ax.set_xlabel('inclination (deg)',fontsize=16) ax.set_ylabel(r'${\tau}_{BY} \cdot \hat{l}$',fontsize=16) ax.legend() # compute BYORPs as a function of libration angle incl = 0; phi_prec=0; obliquity = np.pi/4 l_arr,tau_l_arr=obliq_BY_fig4(body,obliquity,incl,phi_prec) fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150) #ax.plot(o_arr2,tau_l_arr2,'go-',label='sphere') ax.plot(l_arr,tau_l_arr,'rD-',label='body') ax.set_xlabel('libration angle (deg)',fontsize=16) ax.set_ylabel(r'${ \tau}_{BY} \cdot \hat{l}$',fontsize=16) ax.legend() #plt.savefig('tau_BY_lib.png') # fairly sensitive to libration angle ###Output 
_____no_output_____
###Markdown
What next?

Normalize in terms of what Steinberg+11 used and compare drift rates to the sizes of the BYORPs estimated by other people. Done.

We need to figure out how the dimensionless constants used by Jacobson and Scheeres+11 compare with those used by Steinberg+11. Steinberg shows in their figure 4 that they expect similar sizes for the two dimensionless parameters. Our computations are not consistent with that, as we get BYORP coefficients larger than YORP coefficients by a factor of about 100. However, our body is not round, compared to most of theirs.

We need to get a shape model for something that has a YORP or BYORP prediction and check our code with it. Try the shape model of Squannit, the secondary of Moshup.

We need to explore the sensitivity of our BYORP to obliquity with the shape model.
###Code
# we seem to find that moderate obliquity variations can reverse BYORP,
# particularly for a non-round secondary.
# And this is for fixed obliquity, not chaotic ones.
# We might be able to somewhat mitigate the tension between dissipation rate estimates,
# and we would predict obliquity in Didymos! yay!

#https://www.naic.edu/~smarshal/1999kw4.html
#Squannit is the secondary of Moshup, which was 1999 KW4
squannit = pymesh.load_mesh("kw4b.obj")
nf_mesh(squannit)

# we need to normalize it so that its volume is 4/3 pi
# to compute the volume of a tetrahedron that is made from a face + a vertex at the origin,
# we need to compute the determinant of a 3x3 matrix that consists of the 3 vertices in the face.
# we then sum over all faces in the mesh to get the total volume
# alternatively we use the generalized voxel thing in pymesh, which uses 4 vertices.
# to do this we add a vertex at the center, and then we need to make the same number
# of voxels as faces using the vertex at the origin,
# and then we sum over the voxel_volume attributes of all the voxels.
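The face-plus-origin tetrahedron sum described in the comments above can be sketched directly with plain numpy arrays. This is a standalone illustration, not the pymesh voxel route; `volume_from_faces` is a hypothetical helper, and the test mesh is a simple closed tetrahedron whose enclosed volume is 1/6.

```python
import numpy as np

def volume_from_faces(vertices, faces):
    # each face (i0, i1, i2) plus the origin forms a tetrahedron whose signed
    # volume is det([v0; v1; v2]) / 6, with the rows being the face vertices.
    # Summing the signed volumes over a closed, consistently oriented mesh
    # gives the enclosed volume regardless of where the origin sits.
    v = np.asarray(vertices, dtype=float)
    vol = 0.0
    for i0, i1, i2 in faces:
        vol += np.linalg.det(np.array([v[i0], v[i1], v[i2]])) / 6.0
    return vol

# closed tetrahedron with outward-oriented faces; enclosed volume is 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(volume_from_faces(verts, faces))  # ≈ 1/6
```

The three faces that touch the origin contribute zero determinant, so only the slanted face contributes here; for a general mesh every face contributes.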
xmax = 1.5
p = plt_mesh_square(squannit.vertices,squannit.faces,xmax)

vol = volume_mesh(squannit)
print(vol)
R_squannit = pow(vol*3/(4.0*np.pi),0.3333333)
print(R_squannit)
# I don't know what units this is in, maybe km
# the object is supposed to have Beta: 0.571 x 0.463 x 0.349 km (6% uncertainty)

# rescale so it has a volume-equivalent sphere radius of 1
new_squannit = cor_volume(squannit)
p = plt_mesh_square(new_squannit.vertices,new_squannit.faces,xmax)

xrot = np.array([1,0,0])
vrot = rotate_vertices(new_squannit.vertices,xrot,np.pi/2)
p = plt_mesh_square(vrot,new_squannit.faces,xmax)

# reduce the number of faces to something reasonable
short_squannit1, info = pymesh.collapse_short_edges(new_squannit, 0.219)  # if bigger, then fewer faces
nf_mesh(short_squannit1)
meshplot.plot(short_squannit1.vertices, short_squannit1.faces)

short_squannit2, info = pymesh.collapse_short_edges(new_squannit, 0.17)  # if bigger, then fewer faces
nf_mesh(short_squannit2)
meshplot.plot(short_squannit2.vertices, short_squannit2.faces)

p = plt_mesh_square(short_squannit2.vertices,short_squannit2.faces,xmax)
xrot = np.array([1,0,0])
vrot = rotate_vertices(short_squannit2.vertices,xrot,np.pi/2)
p = plt_mesh_square(vrot,short_squannit2.faces,xmax)

# compute BYORPs as a function of obliquity
incl = 0; phi_prec = 0
o_arr1,tau_l_arr1 = obliq_BY_fig2(short_squannit1,incl,phi_prec)

# compute BYORPs as a function of obliquity
incl = 0; phi_prec = 0
o_arr2,tau_l_arr2 = obliq_BY_fig2(short_squannit2,incl,phi_prec)

fig,ax = plt.subplots(1,1,figsize=(6,3),dpi=150)
plt.subplots_adjust(bottom=0.19, top=0.98, left=0.13)
#ax.plot(o_arr2,tau_l_arr2,'go-',label='sphere')
ax.plot(o_arr1,tau_l_arr1,'rD-',label='squannit 302')
ax.plot(o_arr2,tau_l_arr2,'bv-',label='squannit 534')
ax.set_xlabel('obliquity (deg)',fontsize=16)
ax.set_ylabel(r'${ \tau}_{BY} \cdot \hat{l}$',fontsize=16)
ax.legend()
plt.savefig('squannit.png')

from multiprocessing import Pool
# It's a great idea to use Pool, but everything now needs to be global
class ob_info():
    def __init__(self,body):
        self.phi0 = 0
        self.nphi_Sun = 36
        self.nphi = 36
        self.nobliq = 60
        self.incl = 0
        self.phi_prec = 0
        self.o_arr = np.zeros(self.nobliq)
        self.dobliq = np.pi/self.nobliq
        self.body = body
        for i in range(self.nobliq):
            obliquity = i*self.dobliq
            self.o_arr[i] = obliquity*180/np.pi

ob_stuff = ob_info(body1)
tau_l_arr = np.ctypeslib.as_ctypes(np.zeros((60)))
shared_array_tau = sharedctypes.RawArray(tau_l_arr._type_, tau_l_arr)

def f(i):
    body = ob_stuff.body
    obliquity = i*ob_stuff.dobliq
    nphi = ob_stuff.nphi
    nphi_Sun = ob_stuff.nphi_Sun
    incl = ob_stuff.incl
    phi0 = ob_stuff.phi0
    phi_prec = ob_stuff.phi_prec
    tau_BY_x,tau_BY_y,tau_BY_z, tau_l = compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec)
    # write into the shared buffer so the result is visible outside the worker process
    tmp = np.ctypeslib.as_array(shared_array_tau)
    tmp[i] = tau_l

p = Pool()
p.map(f, range(ob_stuff.nobliq))

# compute the BYORP torque on body as a function of obliquity
# for a given inclination and precession angle
# returns obliquity and torque arrays
def call_obliq_BY_fig2(body,incl,phi_prec):
    phi0 = 0
    nphi_Sun = 36  # number of solar positions
    nphi = 36      # number of spin positions
    nobliq = 60    # number of obliquities
    dobliq = np.pi/nobliq
    #tau_l_arr = np.zeros(nobliq)  # to store torques
    o_arr = np.zeros(nobliq)
    tau_l_arr = np.ctypeslib.as_ctypes(np.zeros((60)))
    shared_array_tau = sharedctypes.RawArray(tau_l_arr._type_, tau_l_arr)
    for i in range(nobliq):
        obliquity = i*dobliq
        o_arr[i] = obliquity*180/np.pi
    for i in range(nobliq):
        obliquity = i*dobliq
        tau_BY_x,tau_BY_y,tau_BY_z, tau_l = compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec)
        tau_l_arr[i] = tau_l
    return o_arr,tau_l_arr

# compute BYORPs as a function of precession angle; seems not sensitive to precession angle
incl = 0
#phi_prec=0
obliquity = np.pi/4
p_arr,tau_l_arr = obliq_BY_fig3(body,obliquity,incl)
p_arr2,tau_l_arr2 = obliq_BY_fig3(sphere,obliquity,incl)

fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150)
ax.plot(p_arr2,tau_l_arr2,'go-',label='sphere')
ax.plot(p_arr,tau_l_arr,'rD-',label='body')
ax.set_xlabel('precession angle (deg)',fontsize=16)
ax.set_ylabel(r'${ \tau}_{BY} \cdot \hat{l}$',fontsize=16)
ax.legend()
# I don't understand why this has period 6
# It is very sensitive to obliquity but not to precession angle

incl = 0
# this takes a really long time!
p_arr,o_arr,tau_l_arr_2D = obliq_BY_fig2D(body,incl)

fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150)
ax.set_ylabel('precession angle (deg)',fontsize=16)
ax.set_xlabel('obliquity (deg)',fontsize=16)
maxt = np.max(tau_l_arr_2D)
mint = np.min(tau_l_arr_2D)
maxabs = max(abs(maxt),abs(mint))
im = ax.imshow(tau_l_arr_2D, cmap='RdBu', vmin=-maxabs, vmax=maxabs,
               extent=[np.min(o_arr), np.max(o_arr), np.min(p_arr), np.max(p_arr)],
               origin='lower')
plt.colorbar(im)
###Output
_____no_output_____
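The `as_ctypes`/`RawArray` plumbing used in the Pool cells above can be sanity-checked in a single process: a numpy view made with `np.ctypeslib.as_array` shares memory with the ctypes buffer, so writes through either side are visible to the other. (In an actual Pool run the `RawArray` must additionally be inherited by the workers, e.g. by creating it before the Pool or passing it through the Pool `initializer`.)

```python
import numpy as np
from multiprocessing import sharedctypes

# allocate a shared buffer of 60 doubles, as in the torque arrays above
raw = sharedctypes.RawArray('d', 60)

view = np.ctypeslib.as_array(raw)   # numpy view over the same memory, no copy
view[5] = 3.5                       # write through the numpy side...
print(raw[5])                       # ...and read it back through ctypes

raw[7] = -1.25                      # writes through ctypes are seen by numpy too
print(view[7])
```

This is why the worker should write through a fresh `as_array` view of the shared buffer rather than into a ctypes array that was created in the parent and merely copied into the child.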
Notebooks/wfml-ejercicio-01-entrada-salida-prof.ipynb
###Markdown
Exercise 01. Input - Output

What do we need to do with the (generated) and (selected) outputs?

![image.png](attachment:eb7a70fe-f1f2-486c-b161-4b2013c0f880.png)

![image.png](attachment:781393f7-5584-47d0-90bd-888d487d0b39.png)

###Code
import pandas as pd

# open the file from its GitHub raw URL
namefile = "https://raw.githubusercontent.com/jovenluk/WFML/master/Datasets/bank_customer_data.txt"
df = pd.read_csv(namefile)
df.head()
###Output
_____no_output_____
###Markdown
Loading the employee data

![image.png](attachment:3b2ec634-ae54-48de-bf9c-29479b260ccd.png)

###Code
# open the file from its GitHub raw URL
namefile = "https://raw.githubusercontent.com/jovenluk/WFML/master/Datasets/employee_data.csv"
df = pd.read_csv(namefile)
df.head()
###Output
_____no_output_____
###Markdown
Loading the bank customer data (Excel)

We need to install an extra library on Kaggle so that pandas can read an XLSX file.

![image.png](attachment:2db08b26-6793-42e7-9eec-47516fc03baf.png)

###Code
! pip install openpyxl

# open the file from its GitHub raw URL
namefile = "https://raw.githubusercontent.com/jovenluk/WFML/master/Datasets/bank_customer_data.xlsx"
df = pd.read_excel(namefile, engine='openpyxl')
df.head()
###Output
_____no_output_____
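The same read cycle used above can be checked locally without network access or Kaggle. The file name and columns below are made up for illustration; they only stand in for the bank customer dataset.

```python
import pandas as pd

# a tiny made-up table standing in for the bank customer data
df = pd.DataFrame({"customer_id": [1, 2, 3],
                   "balance": [100.0, 250.5, 80.25]})
df.to_csv("bank_customer_demo.csv", index=False)   # write it out

df2 = pd.read_csv("bank_customer_demo.csv")        # read it back
print(df2.shape)  # → (3, 2)
```

`read_excel` works the same way on a local `.xlsx` path once `openpyxl` is installed; only the reader function and the `engine` argument change.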