  
<code python>
# plot decision tree ('tree' is sklearn.tree, imported earlier in the script)
from matplotlib import pyplot as plt

tree.plot_tree(classifier)
plt.show()
</code>
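For context, a self-contained sketch of the flow that produces such a plot; the toy data, feature names, and class names below are illustrative stand-ins, not the lab's actual data set:

<code python>
# sketch: fit a small decision tree and plot it
from sklearn import tree
from matplotlib import pyplot as plt

# toy data: [experience (years), interned (0/1)] -- illustrative only
X = [[0, 0], [1, 1], [5, 0], [7, 1]]
y = [0, 1, 0, 1]                     # 0 = Not Hired, 1 = Hired

classifier = tree.DecisionTreeClassifier()
classifier.fit(X, y)

tree.plot_tree(classifier,
               feature_names=["Experience", "Interned"],
               class_names=["Not Hired", "Hired"],
               filled=True)
plt.savefig("dtree.png")             # keep a copy of the figure on disk
plt.show()
</code>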
  
  
{{ :ewis:laboratoare:lab8:dtree_edit.png?800 |}}

<note tip>The greedy ID3 algorithm walks down the tree and, at each step, picks the attribute that partitions the data set with the lowest entropy at the next step. The Gini index (impurity), a value between 0 and 1, measures the probability that a randomly chosen instance would be misclassified if it were labeled at random according to the class distribution.</note>
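To make the two measures concrete, a small illustrative computation (not part of the lab code):

<code python>
# entropy and Gini impurity for one set of class labels
import numpy as np

labels = ["Hired", "Hired", "Not Hired", "Hired"]
_, counts = np.unique(labels, return_counts=True)
p = counts / counts.sum()            # class probabilities: [0.75, 0.25]

entropy = -np.sum(p * np.log2(p))    # 0 for a pure node, 1 for a 50/50 split
gini = 1 - np.sum(p ** 2)            # 0 for a pure node, 0.5 for a 50/50 split

print(f"entropy = {entropy:.3f}, gini = {gini:.3f}")  # 0.811, 0.375
</code>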
  
Interpreting the data is straightforward. At each "decision" (internal node) there are two branches: left (false) and right (true), which represent the possible outcomes for the current test attribute (e.g. Interned). A leaf node is reached when all the samples are aligned to either outcome; leaf nodes hold the class labels (e.g. Hired/Not Hired) and are shown with a **different color** for each class (in this case there are 2 classes: Hired/Not Hired). From the example, the decision for hiring a new candidate can be described as follows:
  
== The algorithm. DecisionTreeClassifier ==
  
In Python we use the //DecisionTreeClassifier// from the //scikit-learn// package, which builds the tree for us. We train the model on the data set and can then visualize its decisions. Finally, we validate the model by comparing the target values to the predicted values, which gives the prediction accuracy.
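As a reference, a minimal sketch of this train/predict/validate flow; the iris dataset stands in for the lab data and the 70% training split is an arbitrary choice:

<code python>
# sketch: train a decision tree, predict on held-out data, report accuracy
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# hold out part of the data so the model is validated on unseen samples
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7,
                                                    random_state=0)

classifier = DecisionTreeClassifier()
classifier.fit(X_train, y_train)           # train the model

y_pred = classifier.predict(X_test)        # predicted values
print("accuracy:", accuracy_score(y_test, y_pred))
</code>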
Download the {{:ewis:laboratoare:lab8:lab8.zip|Project Archive}} and install the required packages via //requirements.txt//
  
=== Task 1 (1p) ===
  
Run //task1.py//:
  * The predictions are evaluated to find out the accuracy of the model, and the decision tree is then shown as (pseudo)code (if-else statements) and as a graph representation in //dtree1.png//.
  
Change the amount of data used for training the model and evaluate the results (a sketch follows the list):
  * prediction accuracy and generated output
  * how large is the decision tree regarding the number of leaf nodes?
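A minimal sketch of the experiment; //n_train_percent// mirrors the script's parameter, while the dataset (iris) and the shuffling are stand-ins, since //task1.py// itself is not reproduced here:

<code python>
# sketch: vary the training percentage and inspect accuracy and tree size
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)          # stand-in for the lab data
rng = np.random.default_rng(0)             # shuffle once for a fair split
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]

n_train_percent = 70                       # change this and compare results
n_train = len(X) * n_train_percent // 100

classifier = DecisionTreeClassifier().fit(X[:n_train], y[:n_train])
y_pred = classifier.predict(X[n_train:])

print("accuracy:", accuracy_score(y[n_train:], y_pred))
print("leaf nodes:", classifier.get_n_leaves())
</code>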
  
=== Task 2 (2p) ===
  * The results are plotted on a chart, showing the effect of the amount (percent) of training data on the prediction accuracy
  
Evaluate the results (see the sketch below):
  * How much training data (percent) is required in this case to obtain the most accurate predictions?
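A hedged sketch of how such a chart can be produced; the dataset (iris) and the percentage range are illustrative stand-ins for what //task2.py// actually does:

<code python>
# sketch: effect of the training-data percentage on prediction accuracy
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)             # shuffle once for fair slicing
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]

percents = list(range(10, 100, 10))
accuracies = []
for p in percents:
    n_train = len(X) * p // 100
    model = DecisionTreeClassifier().fit(X[:n_train], y[:n_train])
    accuracies.append(accuracy_score(y[n_train:], model.predict(X[n_train:])))

plt.plot(percents, accuracies, marker="o")
plt.xlabel("training data (%)")
plt.ylabel("prediction accuracy")
plt.show()
</code>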
  
=== Task 3 (3p) ===
  
Run //task3.py//:
  
  * //task3.py// is similar to //task1.py//, using another dataset about wine quality: //winequality_white.csv//, //winequality_red.csv//, to train a decision tree that should predict the quality of the wine based on its measured properties.
  * A brief description of the dataset:
  
</code>
  
Use //n_train_percent// to change the amount of data used for training the model and evaluate the results (a loading sketch follows the list):
  * prediction accuracy and generated output
  * how large is the decision tree regarding the number of leaf nodes?
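A sketch of how the wine data can be loaded and a tree trained on it; the ';' separator follows the UCI distribution of this dataset and the //quality// column name is an assumption, so the copy in the lab archive may be formatted differently:

<code python>
# sketch: load the wine-quality data and train a decision tree on all of it
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("winequality_white.csv", sep=";")  # assumed name/format

X = df.drop(columns=["quality"])   # measured properties (acidity, sugar, ...)
y = df["quality"]                  # target: the quality score

classifier = DecisionTreeClassifier().fit(X, y)
print("leaf nodes:", classifier.get_n_leaves())
</code>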
    
=== Task 4 (4p) ===
  
Create //task4.py//:
  
  * //task4.py// is similar to //task2.py// and should evaluate the accuracy on the wine quality dataset using both decision tree and random forest models. The accuracy of the two models is compared on the plot for different amounts of training data, specified by //n_train_percent//.
  * Run //task4.py// for both red (//winequality_red.csv//) and white (//winequality_white.csv//) wine datasets
  
Evaluate the results (a comparison sketch follows the list):
  * How much training data (percent) is required in this case to obtain the most accurate predictions?
  * What is the average accuracy for each model (decision tree, random forest)?
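A possible starting point for //task4.py//, assuming the same file layout as in Task 3; the 70% split, the shuffling seed, and the forest size are illustrative choices, not requirements:

<code python>
# sketch: compare a decision tree with a random forest on the same split
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# assumed file name and ';' separator (UCI format); shuffle the rows once
df = pd.read_csv("winequality_red.csv", sep=";").sample(frac=1, random_state=0)
X, y = df.drop(columns=["quality"]), df["quality"]

n_train = len(X) * 70 // 100               # e.g. n_train_percent = 70
X_train, y_train = X.iloc[:n_train], y.iloc[:n_train]
X_test, y_test = X.iloc[n_train:], y.iloc[n_train:]

for name, model in [("decision tree", DecisionTreeClassifier()),
                    ("random forest", RandomForestClassifier(n_estimators=100))]:
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
</code>

Repeating this for several values of //n_train_percent// and plotting both accuracy curves gives the comparison chart the task asks for.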
  