The decision tree is an important algorithm for predictive modelling that represents decisions visually and explicitly. It is a graphical, branching representation of all possible outcomes under given conditions. In a decision tree, each internal node represents a test on an attribute, each branch represents an outcome of that test, and each leaf represents the decision reached after evaluating the attributes along the path.
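The node/branch/leaf structure above maps directly onto nested if/else tests. A minimal sketch, using a hypothetical "play tennis" rule set (the attributes and decisions here are illustrative, not from the original text):

```python
# Each internal node tests an attribute, each branch is one outcome of
# that test, and each leaf returns the final decision.
def play_tennis(outlook, humidity, wind):
    if outlook == "sunny":            # internal node: test on outlook
        if humidity == "high":        # internal node: test on humidity
            return "no"               # leaf: decision
        return "yes"                  # leaf: decision
    elif outlook == "overcast":       # branch: another outcome of the test
        return "yes"
    else:                             # remaining branch: rainy
        return "no" if wind == "strong" else "yes"

print(play_tennis("sunny", "high", "weak"))  # -> no
```

Reading a prediction off the tree is just following one root-to-leaf path, which is why trees are easy to inspect and explain.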
Decision trees fall into two types. Classification trees separate a dataset into distinct classes and are generally used when the response variable is categorical. Regression trees are used when the response variable is continuous or numerical.
A decision tree helps in making decisions under a particular set of circumstances and improves communication. It helps a data scientist capture how different decisions lead to different outcomes and choose the optimal one. The algorithm is well suited to problems where instances are represented by attribute-value pairs, where the target function has discrete output values, and where the training data may contain errors.
Decision trees have several advantages: they implicitly perform variable screening, require relatively little effort from the user for data preparation, handle non-linear relationships without loss of performance, and are easy to understand. They are useful in data exploration and make no assumptions about the linearity of the data. They also have drawbacks: as a tree grows deeper and more complex it tends to overfit, reducing accuracy on unseen data, and outcomes may rest on expected values, which can lead to poor decision making.
Decision trees are widely used in finance for option pricing, and banks use them to classify loan applicants by their probability of default. They are also implemented in popular data science libraries in Python and R.
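As an illustrative sketch of the loan-applicant use case (the synthetic data and feature names here are assumptions, not a real credit model), scikit-learn's `DecisionTreeClassifier` can label an applicant and report an estimated default probability via `predict_proba`:

```python
from sklearn.tree import DecisionTreeClassifier

# Synthetic applicants: features are [income_in_thousands, debt_ratio];
# label 1 means the applicant defaulted, 0 means they repaid.
X = [[20, 0.9], [25, 0.8], [30, 0.7], [60, 0.2], [80, 0.1], [90, 0.3]]
y = [1, 1, 1, 0, 0, 0]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

applicant = [[22, 0.85]]                # low income, high debt ratio
print(clf.predict(applicant)[0])        # predicted class (1 = default)
print(clf.predict_proba(applicant)[0])  # estimated class probabilities
```

The same API shape (`fit`, `predict`, `predict_proba`) applies to `DecisionTreeRegressor` for continuous targets, minus the probability method.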
How the decision tree algorithm works
Video Source: Thales Sehn Körting