The purpose of an **information system** is to extract useful information from raw data. **Data science** is a field of study that aims to understand and analyze data by means of **statistics, big data, machine learning** and to provide support for decision makers and autonomous systems. While this sounds complicated, the tools are based on mathematical models and specialized software components that are already available (e.g. Python packages). In the following labs we will learn about... learning. Machine Learning, to be more specific, and its two main classes: **Supervised Learning** and **Unsupervised Learning**. The general idea is to write software programs that can learn from the available data, identify patterns and make decisions with minimal human intervention, based on Machine Learning algorithms.
  
==== Machine Learning. Supervised Learning ====
<code python>
# map the predicted class numbers back to the original labels
p = [inv_map[e] for e in p]
print(p)
</code>

See the next example on how you can plot the decision tree:

<code python>
# continue from previous example

# plot decision tree
from matplotlib import pyplot as plt

tree.plot_tree(classifier)
plt.show()
</code>
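Besides the graphical plot, //scikit-learn// can also print the fitted tree as text; a minimal sketch, assuming the fitted //classifier// from the example above:

<code python>
from sklearn.tree import export_text

# print the splits as indented pseudo-code, one line per test/leaf
print(export_text(classifier))
</code>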
== Random forests (overview) ==

  * A "forest" of decision trees
  * Decision trees are susceptible to overfitting
  * One solution is to construct several trees and let them "vote" on the final classification
  * We do this by randomly re-sampling the input data for each tree (fancy term: bootstrap aggregating); see the sketch below
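To make the "bootstrap" part concrete, here is a minimal sketch of drawing one bootstrap sample from a toy data set (illustration only, not part of the lab scripts):

<code python>
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)  # toy "training set" of 10 samples

# sampling with replacement: some samples repeat, others are left out
bootstrap_sample = rng.choice(data, size=len(data), replace=True)
print(bootstrap_sample)
</code>

Each tree in the forest is trained on a different such sample, so the individual overfitted trees average out when they vote.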
In Python (scikit-learn), we can just use the //RandomForestClassifier// instead of the //DecisionTreeClassifier//. Some parameters have to be defined, such as the number of trees (//n_estimators//) and the random state (//random_state// controls the randomness of the bootstrap samples when building the trees; set it to a fixed integer for reproducible results).
<code python>
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=10, random_state=0)
</code>
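A minimal usage sketch (//X// and //y// below are toy placeholders for the real feature matrix and class labels):

<code python>
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# toy placeholder data: 6 samples, 2 features, binary labels
X = np.array([[0, 1], [1, 0], [1, 1], [0, 0], [2, 1], [0, 2]])
y = np.array([1, 0, 1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X, y)                  # train the forest
print(model.predict([[2, 0]]))   # predict the class of a new sample
</code>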
  
  
{{ :ewis:laboratoare:lab8:dtree_edit.png?800 |}}
<note tip>The greedy ID3 algorithm walks down the tree and, at each step, picks the attribute that partitions the data set with the lowest entropy at the next step. The Gini index (impurity, between 0 and 1) measures the probability that a randomly chosen instance would be misclassified if it were labeled randomly according to the class distribution.</note>
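As a minimal sketch, both measures can be computed from the class proportions (the label values below are made up for illustration):

<code python>
import numpy as np

def entropy(labels):
    # H = -sum(p * log2(p)) over the class proportions p
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(labels):
    # G = 1 - sum(p^2) over the class proportions p
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)

labels = ['Hired', 'Hired', 'Hired', 'Not Hired']
print(entropy(labels))  # ~0.811
print(gini(labels))     # 0.375
</code>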
  
Interpreting the data is straightforward. At each "decision" (internal node) there are two branches: left (false) and right (true), which represent the possible outcomes for the current test attribute (e.g. Interned). The leaf nodes are reached when all the samples are aligned to either outcome; they hold the class labels (e.g. Hired/Not Hired) and are shown with a **different color** for each class (in this case there are 2 classes: Hired/Not Hired). From the example, the decision for hiring a new candidate can be described as a sequence of simple if/else tests on the attributes.
<code python>
import pandas as pd

input_file = "./data/past_hires.csv"
df = pd.read_csv(input_file, header=0)

# format the data, map classes to numbers
d = {'Y': 1, 'N': 0}
df['Hired'] = df['Hired'].map(d)
df['Employed?'] = df['Employed?'].map(d)
df['Top-tier school'] = df['Top-tier school'].map(d)
df['Interned'] = df['Interned'].map(d)
d = {'BS': 0, 'MS': 1, 'PhD': 2}
df['Level of Education'] = df['Level of Education'].map(d)

target = df['Hired']
print(target)
</code>
  
== The algorithm. DecisionTreeClassifier ==
  
In Python we use the //DecisionTreeClassifier// from the //scikit-learn// package, which creates the tree for us. We train the model using the data set and then we can visualize the decisions. We can then validate the model by comparing the target values to the predicted values, which gives the prediction accuracy.
<code python>
from sklearn import tree
import numpy as np

# load the data (see previous example)

# print features and data (assumed: every column except the target)
features = [c for c in df.columns if c != 'Hired']
print(features, df[features].values)

# train the decision tree on the data set
classifier = tree.DecisionTreeClassifier()
classifier.fit(df[features].values, target)
</code>
==== Exercises ====
Download the {{:ewis:laboratoare:lab8:lab8.zip|Project Archive}} and install the required packages via //requirements.txt//
  
=== Task 1 (1p) ===
  
Run //task1.py//:
  * The predictions are evaluated to find out the accuracy of the model, and the decision tree is then shown both as (pseudo)code (if/else statements) and as a graph representation saved as //dtree1.png//.
  
Change the amount of data used for training the model (//n_train_percent//) and evaluate the results:
  * prediction accuracy and generated output
  * how large is the decision tree in terms of the number of leaf nodes?
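A possible way to check the tree size programmatically (assuming the fitted //classifier// from the examples above):

<code python>
# number of leaf nodes and depth of the fitted decision tree
print(classifier.get_n_leaves())
print(classifier.get_depth())
</code>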
  
=== Task 2 (2p) ===
Run //task2.py//:
  * The results are plotted on a chart, showing the effect of the amount (percent) of training data on the prediction accuracy
  
Evaluate the results:
  * How much training data (percent) is required in this case to obtain the most accurate predictions?
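One way to vary the amount of training data is //train_test_split// (a sketch; //X// and //y// stand for the feature matrix and target from the data preparation step):

<code python>
from sklearn import tree
from sklearn.model_selection import train_test_split

# measure the prediction accuracy for several training set sizes
for p in [0.2, 0.4, 0.6, 0.8]:
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=p, random_state=0)
    classifier = tree.DecisionTreeClassifier().fit(X_train, y_train)
    print(p, classifier.score(X_test, y_test))  # mean accuracy on test data
</code>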
  
=== Task 3 (3p) ===
  
Run //task3.py//:
  
  * //task3.py// is similar to //task1.py//, using another dataset about wine quality: //winequality_white.csv//, //winequality_red.csv//, to train a decision tree that should predict the quality of the wine based on measured properties.
  * A brief description of the dataset:
  
<code>
Input variables (based on physicochemical tests):
  fixed acidity, volatile acidity, citric acid, residual sugar, chlorides,
  free sulfur dioxide, total sulfur dioxide, density, pH, sulphates, alcohol
Output variable (based on sensory data):
  quality (score between 0 and 10)
</code>
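A minimal sketch for loading the wine data with pandas (the file path is assumed to follow the project layout used above; note that the original UCI files are semicolon-separated, so adjust //sep// if the lab copies differ):

<code python>
import pandas as pd

# load the red wine data set (assumed path and separator)
df = pd.read_csv("./data/winequality_red.csv", sep=";", header=0)
print(df.columns)

target = df['quality']
</code>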
  
Use //n_train_percent// to change the amount of data used for training the model and evaluate the results:
  * prediction accuracy and generated output
  * how large is the decision tree in terms of the number of leaf nodes?
    
=== Task 4 (4p) ===
  
Create //task4.py//:
  
  * //task4.py// is similar to //task2.py// and should evaluate the accuracy on the wine quality dataset using both decision tree and random forest models. The accuracy of the two models is compared on the plot for different amounts of training data, specified by //n_train_percent//.
  * Run //task4.py// for both red (//winequality_red.csv//) and white (//winequality_white.csv//) wine datasets
  
Evaluate the results:
  * How much training data (percent) is required in this case to obtain the most accurate predictions?
  * What is the average accuracy for each model (decision tree, random forest)?
  
/*
=== Bonus (4p + 2p) ===
  
Use //n_train_percent// to change the amount of data used for training the model and evaluate the results. Set //n_train_percent// as the generated //code// **(//UCODE//)** and report the results:
  * prediction accuracy and generated output //dtree32.png//
  * how large is the decision tree in terms of the number of leaf nodes?
  
Create a new script similar to //task31_sol.py// to compare the decision trees with random forest models using variable amounts (percent) of training data:
  * How much training data (percent) is required in this case to obtain the most accurate predictions?
  * What is the average accuracy for each model (decision tree, random forest)?
  * Explain the low accuracy obtained for this case study. What would be required to improve the results? **(+2p)**
</note>
*/
  
==== Resources ====
[[https://archive.ics.uci.edu/ml/datasets/wine+quality]]
  
/*[[http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_MLB_HeightsWeights]]*/
  