SVM-Based Approaches for Predictive Modeling of Survival Data


Han-Tai Shiao and Vladimir Cherkassky
Department of Electrical and Computer Engineering, University of Minnesota, Twin Cities
Minneapolis, Minnesota 55455, U.S.A.

Abstract: Survival data is common in medical applications. The challenge in applying predictive data-analytic methods to survival data is the treatment of censored observations, for which the survival times are unknown. This paper presents a formalization of the analysis of survival data as a binary classification problem. For this binary classification setting, we propose two different strategies for encoding censored data, leading to two advanced SVM-based formulations: SVM+ and SVM with uncertain class labels. Further, we present an empirical comparison of the advanced SVM methods and the classical Cox modeling approach for predictive modeling of survival data. These comparisons suggest that the proposed SVM-based models consistently yield better predictive performance (than classical statistical modeling) for real-life survival data sets.

Index Terms: classification, survival analysis, Support Vector Machine (SVM), SVM+, Learning Using Privileged Information (LUPI), SVM with uncertain labels, Cox model.

I. INTRODUCTION

A significant proportion of medical data is a collection of time-to-event observations, and methods for survival analysis developed in classical statistics have been used to model such data. Survival analysis focuses on the time elapsed from an initiating event to an event, or endpoint, of interest [1]. Classical examples are the time from birth to death, from disease onset to death, and from entry to a study to relapse. All these times are generally known as the survival time, even when the endpoint is something other than death. The same statistical methodology can also be used in many other settings, such as reliability engineering and financial insurance. Even though the purpose of a statistical analysis may vary from one situation to another, the ambitious aim of most statistical analyses is to build a model that relates explanatory variables to occurrences of the event.

The field of machine learning targets the same or similar goals: learning is the process of estimating an unknown dependency between a system's inputs and its output, based on a limited number of observations [2]. However, machine learning techniques have not been widely used for survival analysis, for two major reasons. First, the survival time is not necessarily observed in all samples. For example, patients might not experience the event (death or relapse) during the study, or they may be lost to follow-up. Hence, the survival time is incomplete and only known up to a point, which is quite different from the traditional notion of missing data. The second reason is methodological. Machine learning techniques are usually developed and applied under a predictive setting, where the main goal is prediction accuracy for future (or test) samples. In contrast, classical statistical methods aim at estimating the true probabilistic model of the available data, so prediction accuracy is just one of several performance indices. The methodological assumption is that if an estimated model is correct, then it should yield good predictions. Consequently, classical statistical methodology often does not clearly differentiate between the training (model estimation) and prediction (or test) stages.
This paper assumes a predictive setting, which is appropriate for many applications. Under this setting, the survival time is known for the training data but is not available during the prediction (or testing) stage. Thus, modifications are required for applying existing machine learning approaches to survival data analysis. Previously, several studies applied Support Vector Machines (SVM) to survival data [3]-[5]. Most of these efforts formalize the problem under the regression setting; specifically, SVM regression was used to estimate a model that predicts the survival time. However, formalization under the regression setting is intrinsically more difficult than classification. Further, practitioners generally use the modeling outputs as a reference, and they are usually concerned with the status of a patient at a given time, such as six months after surgery or two years after a transplant. In this paper, we propose a special classification formulation that addresses the incomplete information in the survival time. Instead of predicting the survival time, we estimate a model that predicts a subject's status at a time point of interest.

This paper is organized as follows. The characteristics of survival data are summarized in Section II. The predictive problem setting for survival analysis is introduced in Section III. The proposed SVM-based formulations are introduced in Section IV. Empirical comparisons for several synthetic and real-life data sets are presented in Sections V and VI. Finally, the discussion and conclusions are given in Section VII.

II. SURVIVAL DATA ANALYSIS

This section provides general background on survival data analysis and its terminology.

[Fig. 1. Example of survival data on a study-time scale. The exact observations are indicated by solid dots, and the censored observations by hollow dots.]

Survival data (or failure time data) are obtained by observing individuals from a certain initial time to either the occurrence of a predefined event or the end of the study. The predefined event is often the failure of a subject or the relapse of a disease. The major difference between survival data and other types of numerical data is that the time to the event is not necessarily observed in all individuals. A common feature of these data sets is that they contain censored observations. Censored data arise when an individual's life length is known only to fall within a certain period of time. Possible censoring schemes are right censoring, where all that is known is that the individual is still alive at a given time; left censoring, where all that is known is that the individual experienced the event of interest prior to the start of the study; and interval censoring, where the only information is that the event occurs within some interval. In this paper, we only consider the right censoring scheme.

A graphical representation of the survival data for a hypothetical study with six subjects is shown in Figure 1. In this study, subjects 2 and 6 experienced the event of interest prior to the end of the study; they are the exact observations. Subjects 1, 3, and 5, who experienced the event after the end of the study, are only known to be alive at the end of the study. Subject 4 was included in the study for some time, but further observation could not be obtained. The data for subjects 1, 3, 4, and 5 are called censored (right-censored) observations. Thus, for the censored observations it is known that the survival time is greater than a certain value, but it is not known by how much.

Suppose T denotes the event time, such as death or lifetime, and C denotes the censoring time, e.g., the end of the study or the time an individual withdraws from the study. The T's are assumed to be independent and identically distributed with probability density function ϕ(t) and survival function S(t). Under the right censoring scheme, we only know that T_i > C_i, with C_i observed. The survival data can then be represented by pairs of random variables (U_i, δ_i), i = 1,...,n. Here δ_i indicates whether the observed survival time U_i corresponds to an event (δ_i = 1) or is censored (δ_i = 0), and U_i equals T_i if the lifetime or event is observed and C_i if it is censored. Mathematically, U_i and δ_i are defined as

    U_i = min(T_i, C_i),                                            (1)
    δ_i = I(T_i ≤ C_i) = 1 if the event occurred, 0 for a censored observation.   (2)

In Figure 1, subjects 4 and 6 have the same observed survival time (U_4 = U_6), but their censoring indicators are different (δ_4 = 0, δ_6 = 1). Therefore, in survival analysis we are given a set of data (x_i, U_i, δ_i), i = 1,...,n, where x_i ∈ R^d, U_i ∈ R_+, and δ_i ∈ {0,1}. In contrast, under the supervised learning setting we are given a set of training data (x_i, y_i), i = 1,...,n, where x_i ∈ R^d and y_i ∈ R. The target values y_i can be real-valued, as in standard regression, or binary class labels, as in classification.
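For concreteness, the construction in (1)-(2) amounts to the few lines below; the event and censoring times here are made-up numbers for illustration, not data from the paper.

```python
import numpy as np

# Hypothetical event times T and censoring times C for six subjects
T = np.array([9.0, 3.5, 8.0, 4.5, 7.5, 5.0])
C = np.array([6.0, 6.0, 6.0, 4.0, 6.0, 6.0])

U = np.minimum(T, C)             # observed survival time, eq. (1)
delta = (T <= C).astype(int)     # event indicator, eq. (2): 1 = event observed, 0 = censored
print(list(zip(U, delta)))
```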
The classical statistical approach to modeling survival data aims at estimating the survival function S(t), the probability that the time of death is greater than a given time t. More generally, the goal is to estimate S(t|x), the survival function conditioned on a patient's characteristics, denoted by the feature vector x. Assuming that the probabilistic model S(t|x) is known, or can be accurately estimated from the available data, this model provides a complete statistical characterization of the data. In particular, it can be used for prediction and for explanation (i.e., identifying input features that are strongly associated with an outcome, such as death).

III. PREDICTIVE MODELING OF SURVIVAL DATA

In many applications, the goal is to estimate (predict) survival at a pre-specified time point τ, e.g., survival of cancer patients two years after initial diagnosis, or the survival status of patients one year after a bone marrow transplant procedure. Generally τ can be about half of the maximum observed survival time. Next we describe a possible formalization of this problem under the predictive setting, leading to a binary classification formulation.

Classification problem setting: Given the training survival data (x_i, U_i, δ_i, y_i), i = 1,...,n, where x_i ∈ R^d, U_i ∈ R_+, δ_i ∈ {0,1}, and y_i ∈ {-1,+1}, estimate a classification model f(x) that predicts a subject's status at a pre-specified time τ based on the input (or covariates) x. The status of subject i at time τ is a binary class label obtained through the following encoding:

    y_i = +1  if U_i < τ,
    y_i = -1  if U_i ≥ τ.                                           (3)

Note that U_i and δ_i are only available for training, not at the prediction (testing) stage. So the challenge of predictive modeling is to develop novel classification formulations that incorporate the uncertain nature of censored data.

[Fig. 2. Example of survival data under the predictive problem setting. The goal is to find a model that predicts the subjects' statuses at time τ.]

Consider the hypothetical study shown in Figure 2, with a subject's status given by (3). There is no ambiguity in the statuses of subjects 2 and 6. Likewise, the survival status of subject 5 is known, even though the observation is censored. However, the survival statuses of subjects 1, 3, and 4 are unknown, since their observed survival times are shorter than τ.
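A minimal sketch of the label encoding (3) follows. It also flags the censored observations with U_i < τ, whose true status at τ is unknown; the function name and inputs are illustrative, not from the paper.

```python
import numpy as np

def encode_status(U, delta, tau):
    """Binary class labels at time tau, per eq. (3), plus an 'ambiguous' mask
    for censored subjects whose observed time is shorter than tau."""
    U = np.asarray(U, dtype=float)
    delta = np.asarray(delta, dtype=int)
    y = np.where(U < tau, 1, -1)            # +1: event before tau, -1: survived past tau
    ambiguous = (delta == 0) & (U < tau)    # censored before tau -> status at tau unknown
    return y, ambiguous
```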

There are two simplistic ways to incorporate censored data into the standard classification formulation:

- Treat the censoring time as the actual event time, i.e., replace T_i with C_i. This approach underestimates the actual event time, because T_i > C_i.
- Simply ignore the censored data and estimate a binary classifier using only the exact observations. This approach yields suboptimal models, as it discards the information available in the censored data.

This paper investigates two different strategies for incorporating censored data into SVM-based classifiers:

1) Since censoring information is available for the training data but not during prediction, the censored data can be regarded as privileged information under the so-called Learning Using Privileged Information (LUPI) paradigm [6], [7].

2) We can assign probabilities that reflect the uncertain status of the censored data samples. One simple rule is to set the probability of a subject being alive at time τ proportional to the (known) survival time, as indicated in Figure 2. That is, Pr(y_i = -1 | x_i) = U_i/τ, or equivalently Pr(y_i = +1 | x_i) = 1 - U_i/τ. The idea is that if U_i is small, it is more likely that subject i will not survive to time τ; on the other hand, if U_i is very close to τ, subject i will be alive at time τ with high probability. Therefore, the survival data (x_i, U_i, δ_i), i = 1,...,n, can be translated into (x_i, U_i, l_i), i = 1,...,n. For exact observations, l_i = y_i ∈ {-1,+1}, i = 1,...,m. For censored observations, l_i = p_i ∈ [0,1], i = m+1,...,n, where

    p_i = Pr(y_i = -1 | x_i) = U_i/τ                                (4)

accounts for the uncertainty about the class membership of x_i. The concept of assigning a probability to the uncertain status can be extended to the exact observations: for an exact observation, we have its status y_i with probability p_i = 1. The survival data are then represented as (x_i, U_i, p_i, y_i), i = 1,...,n. This formalization of censored data leads to the so-called SVM with uncertain labels modeling approach [8]. Both modeling approaches are presented in Section IV.

Finally, we describe the application of classical survival analysis under the predictive setting introduced earlier in this section. Classical survival analysis models describe the occurrence of the event by means of survival curves and hazard rates, and analyze the dependence of this event on covariates by means of regression models [1]. One of the most popular survival-curve estimation methods is the Cox modeling approach based on the proportional hazards model. Once a survival function S(t|x) is known or estimated (from training data), it can be used for prediction. Specifically, for a new (test) input x_i, the prediction is obtained by a simple thresholding rule

    y_i = +1  if S(t|x_i) < r,
    y_i = -1  if S(t|x_i) ≥ r,                                      (5)

where the threshold value r should reflect the misclassification costs given a priori. In this paper, we assume equal misclassification costs; hence, the threshold level is set to r = 0.5. This approach will be used to estimate the prediction accuracy (test error) of the Cox model in the empirical comparisons presented in Sections V and VI.
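Both the soft-label rule (4) and the thresholding rule (5) are direct to implement. In the sketch below, surv_prob_at_tau stands for an already-estimated survival probability S(τ|x) for each test input; it is a placeholder, since the paper does not prescribe a particular estimator at this point.

```python
import numpy as np

def censored_soft_labels(U_censored, tau):
    """Eq. (4): p_i = U_i / tau, the probability that a censored subject is alive at tau.
    Exact observations keep their known label with probability 1."""
    return np.clip(np.asarray(U_censored, dtype=float) / tau, 0.0, 1.0)

def cox_threshold_predict(surv_prob_at_tau, r=0.5):
    """Eq. (5): predict +1 (event before tau) when the estimated S(tau|x) falls below r."""
    s = np.asarray(surv_prob_at_tau, dtype=float)
    return np.where(s < r, 1, -1)
```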
IV. SVM-BASED FORMULATIONS FOR SURVIVAL ANALYSIS

This section presents two recent advanced SVM-based formulations appropriate for predictive modeling of survival data. The presentation starts with a general description of these formulations, followed by a specific description of how censored data are incorporated into them.

A. SVM+

One strategy for handling survival data is the setting known as Learning Using Privileged Information (LUPI), developed by Vapnik [6], [7]. In a data-rich world, there often exists additional information about training samples that is not reflected in the training data itself. This additional information is simply ignored by standard inductive methods such as SVM, yet its effective use during training often results in improved generalization [7]. Under the LUPI setting, we are given a set of triplets (x_i, x_i*, y_i), i = 1,...,n, where x_i ∈ R^d, x_i* ∈ R^k, and y_i ∈ {-1,+1}. Here (x, y) is the usual labeled training data, and x* denotes the additional privileged information available only for the training data. Note that the privileged information is defined in a different feature space. The SVM+ approach maps the inputs x_i and x_i* into two different spaces: the decision space Z, via the mapping Φ(x): x → z, which is the same feature space used in standard SVM; and the correcting space Z*, via the mapping Φ*(x*): x* → z*, which reflects the privileged information about the training data. The goal of SVM+ is to estimate a decision function (w·z) + b by using the correcting function ξ(z*) = (w*·z*) + d ≥ 0 as additional constraints on the training errors (or slack variables) in the decision space. The SVM+ classifier is estimated from the training data by solving the following optimization problem:

    minimize    (1/2)‖w‖² + (γ/2)‖w*‖² + C Σ_{i=1}^{n} ξ_i
    subject to  ξ ⪰ 0,
                y_i((w·z_i) + b) ≥ 1 - ξ_i,    i = 1,...,n,
                ξ_i = (w*·z*_i) + d,           i = 1,...,n,          (6)

with w ∈ R^d, b ∈ R, w* ∈ R^k, d ∈ R, and ξ ∈ R^n_+ as the variables. The symbol ⪰ denotes componentwise inequality, and R_+ denotes the non-negative real numbers. Predictive modeling of survival data can be formalized under the SVM+/LUPI formulation (6) as explained next. The available survival data (x_i, U_i, p_i, y_i) can be represented as (x_i, x_i*, y_i), where x_i* = (U_i, p_i) is the privileged information. The problem of survival analysis can then be formalized and modeled using the SVM+/LUPI paradigm.

B. SVM with Uncertain Labels

This section describes a novel SVM-based formulation [8] that introduces the notion of uncertain class labels. That is, some instances (training samples) are not associated with definite class labels; for such uncertain labels, only the confidence levels (or probabilities) regarding the class memberships are provided. In the context of survival analysis, exact observations have known class labels, and censored observations have uncertain class labels. For non-separable survival data, we have the following optimization problem:

    minimize    (1/2)‖w‖² + C Σ_{i=1}^{m} ξ_i + C* Σ_{i=m+1}^{n} (ξ_i⁻ + ξ_i⁺)
    subject to  ξ ⪰ 0,
                y_i((w·x_i) + b) ≥ 1 - ξ_i,                     i = 1,...,m,
                ξ⁻ ⪰ 0,  ξ⁺ ⪰ 0,
                q_i⁻ - ξ_i⁻ ≤ (w·x_i) + b ≤ q_i⁺ + ξ_i⁺,        i = m+1,...,n,   (7)

with w ∈ R^d, b ∈ R, ξ ∈ R^m_+, ξ⁻ ∈ R^{n-m}_+, and ξ⁺ ∈ R^{n-m}_+ as the variables. The first group of constraints is for the exact observations. For the censored observations, the decision values (w·x_i) + b are bounded by q_i⁻ and q_i⁺. These bounds are functions of p_i, a, and η:

    q_i⁻ = (1/a) log(1/p_i - 1) - η,
    q_i⁺ = (1/a) log(1/p_i - 1) + η,

where a = log(1/η - 1) is a constant and η is the maximum deviation of the probability estimate from p_i [8], [9]. The p_i values defined in (4) encode the information about survival time, available in the training data, for both censored and exact observations. This formulation can be extended to a nonlinear (kernel) parameterization using standard SVM methodology. The method is known as, and will be referred to as, psvm in this paper.
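To make formulation (7) concrete, here is a sketch that solves the linear "SVM with uncertain labels" problem as a quadratic program with cvxpy. It uses the probability-to-decision-value band written above; prob_neg is the estimated probability of the -1 class for each censored sample (the p_i of eq. (4)), and all function names and default parameter values are illustrative rather than taken from the paper.

```python
import numpy as np
import cvxpy as cp

def psvm_linear(X_exact, y_exact, X_cens, prob_neg, C=1.0, C_star=1.0, eta=0.05):
    """Linear SVM with uncertain labels: a sketch of formulation (7)."""
    m, d = X_exact.shape
    k = X_cens.shape[0]
    a = np.log(1.0 / eta - 1.0)                          # a = log(1/eta - 1)
    p = np.clip(prob_neg, 1e-6, 1 - 1e-6)
    q = np.log(1.0 / p - 1.0) / a                        # probability mapped to a decision value
    q_lo, q_hi = q - eta, q + eta                        # q_i^- and q_i^+

    w, b = cp.Variable(d), cp.Variable()
    xi = cp.Variable(m, nonneg=True)                     # slacks for exact observations
    xi_lo = cp.Variable(k, nonneg=True)                  # slacks for censored observations
    xi_hi = cp.Variable(k, nonneg=True)

    f_exact = X_exact @ w + b
    f_cens = X_cens @ w + b
    constraints = [
        cp.multiply(y_exact, f_exact) >= 1 - xi,         # margin constraints, exact data
        f_cens >= q_lo - xi_lo,                          # decision values of censored data
        f_cens <= q_hi + xi_hi,                          #   stay inside the soft band
    ]
    objective = cp.Minimize(0.5 * cp.sum_squares(w)
                            + C * cp.sum(xi)
                            + C_star * cp.sum(xi_lo + xi_hi))
    cp.Problem(objective, constraints).solve()
    return w.value, b.value
```

The SVM+ problem (6) could be set up in the same style by adding the correcting-space variables w* and d and tying the slacks ξ_i to (w*·z*_i) + d.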
V. EMPIRICAL COMPARISONS FOR SYNTHETIC DATA

This section describes empirical comparisons between the psvm and SVM+/LUPI methods and the Cox modeling approach [1]. Practical application of these methods to finite data involves additional simplifications, as discussed next:

- For SVM+, the non-linearity is modeled only in the correcting space [10]. That is, in all experiments the decision space uses a linear parameterization, and the correcting space is implemented via a non-linear (RBF) kernel.
- psvm uses either a linear or a non-linear mapping in the experiments.

Consequently, psvm with RBF kernel has three tuning parameters, C, C*, and σ (the RBF width parameter), whereas SVM+ with RBF kernel has three tuning parameters, C, γ, and σ. Furthermore, psvm with linear kernel has two tuning parameters (C and C*). In contrast, there are no tunable parameters in the Cox modeling approach.

The empirical comparisons are designed to understand the relative advantages and limitations of SVM-based methods for modeling survival data sets with various statistical characteristics, such as the number of training samples, the noise in the observed survival times, and the proportion of censoring. The synthetic data set is generated as follows [11]:

- Set the number of input features d to 30. Generate x ∈ R^d with each element x_i being a random number uniformly distributed within [-1, 1].
- Define the coefficient vector as β = [ , , 2, 3, 3, , , , , 0, 2, 0, 2, 2, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0].
- Generate the event time T following an Exp((β·x) + 2) distribution. Gaussian noise ν ∼ N(0, 0.2) is then added to the event time T.
- Generate the censoring time C following an Exp(λ) distribution. The survival time and event indicator are obtained according to (1) and (2). The rate λ of the exponential distribution is used to control the proportion of censoring in the training set.
- Assign a class label to each data vector by the rule in (3). The time of interest τ is set to the median value of the survival times, so that the prior probability of each class is about the same.
- Generate 400 samples for training, 400 for validation, and 2000 for testing (a code sketch of these generation steps follows the experimental procedure below).

This data set conforms to the probabilistic assumptions (i.e., exponential distributions) underlying the classical modeling approach, so the Cox modeling approach is expected to be very competitive for the synthetic data. The following experimental procedure was used in all experiments:

- Estimate the classifier using the training data.
- Find the optimal tuning parameters for each method using the validation data. For the Cox modeling approach, the validation data are not used.
- Estimate the test error of the final model using the test data.
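The sketch below generates right-censored synthetic data along these lines. Because parts of the published coefficient vector did not survive in the source text, beta is filled with illustrative values, and the exponential parameter (β·x) + 2 is treated as a rate and clipped to stay positive; both choices are assumptions.

```python
import numpy as np

def make_synthetic_survival(n, d=30, lam_censor=0.05, rng=None):
    """Generate (X, U, delta, y, tau) roughly following the recipe of Section V."""
    rng = np.random.default_rng() if rng is None else rng
    beta = np.zeros(d)
    beta[:15] = [1, 1, 2, 3, 3, 1, 1, 1, 1, 0, 2, 0, 2, 2, 0]     # illustrative values only
    X = rng.uniform(-1.0, 1.0, size=(n, d))                        # x ~ U[-1, 1]^d
    rate = np.clip(X @ beta + 2.0, 1e-3, None)                     # assumed rate of Exp((beta.x)+2)
    T = rng.exponential(1.0 / rate)                                # event times
    T = np.maximum(T + rng.normal(0.0, np.sqrt(0.2), size=n), 0)   # additive Gaussian noise, clipped at 0
    C = rng.exponential(1.0 / lam_censor, size=n)                  # censoring times ~ Exp(lam)
    U = np.minimum(T, C)                                           # observed time, eq. (1)
    delta = (T <= C).astype(int)                                   # event indicator, eq. (2)
    tau = np.median(U)                                             # time of interest
    y = np.where(U < tau, 1, -1)                                   # class labels, eq. (3)
    return X, U, delta, y, tau

X_tr, U_tr, d_tr, y_tr, tau = make_synthetic_survival(400, rng=np.random.default_rng(0))
```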

[Table I. The test errors (%) for the synthetic data with 400 training samples (Cox, psvm linear, psvm rbf, LUPI); per-trial values not recovered.]
[Table II. The test errors (%) for the synthetic data with 250 training samples.]
[Table III. The test errors (%) for the synthetic data with 100 training samples.]
[Table IV. The test errors (%) for the synthetic data with 50 training samples.]

The SVM+/LUPI method has three tunable parameters, C, γ, and σ. These parameters are selected using the validation data, considering C in the range [10^-1, 10^2], γ in [10^-3, 10^-1], and σ in [2^-2, 2^2] for model selection (a generic selection loop is sketched at the end of this subsection). For psvm with RBF kernel, we consider C and C* in the range [10^-1, 10^2] and σ in [2^-2, 2^2]. Further, the experiment is performed ten times with different random realizations of the training, validation, and test data.

In this experiment, the average proportion of censored observations is 16.1% (about 64 observations in the training set are censored). The test errors for the ten trials are shown in Table I. The average test errors in percent (along with standard deviations) for the Cox model, psvm with linear kernel, psvm with RBF kernel, and LUPI are 27.5±1.0, 25.6±1.4, 26.1±0.9, and 26.2±1.4, respectively. The psvm with linear kernel achieves the lowest test error among the methods in most trials. Comparing the psvm method with different kernels, it is not surprising that psvm with a linear kernel performs better than with an RBF kernel: the synthetic data are generated from a nearly linear model, so there is intrinsic linearity in the data, and methods with a linear kernel are expected to perform better than those with an RBF kernel. The Cox model has the highest test error in most trials. These results illustrate the potential advantage of the SVM-based methods. Note that the SVM-based methods yield similar or superior performance relative to the classical Cox model, even though the training and test data are generated using exponential distributions (for which the Cox method is known to be statistically optimal).

A. Number of Training Samples

To investigate the effect of training sample size on the test errors, the training sample size is reduced to 250, 100, and 50. The validation sample sizes are changed accordingly. The results are reported in Tables II, III, and IV. For 250 training samples, the average test errors for the Cox model, psvm with linear kernel, psvm with RBF kernel, and LUPI are 29.2±1.0, 27.9±1.1, 28.3±1.3, and 28.7±1.9, respectively. The psvm with linear kernel has the best performance in five trials. The relative performance of psvm with RBF kernel and LUPI is roughly the same. However, the performance gap between the Cox model and psvm with linear kernel closes as the size of the training data is reduced. This observation is more evident when the sample size is reduced to 100: for 100 training samples, the Cox model has the lowest test error in four trials, whereas psvm with linear kernel has the best performance in only three trials. When the training sample size is further reduced to 50, both the Cox model and psvm with linear kernel are outperformed by psvm with RBF kernel. This can be attributed to the high dimensionality of the input (feature) vectors: with high-dimensional inputs and only 50 samples available for training, methods with a linear kernel fail to capture the linearity of the data.
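Model selection over these parameter grids is a plain exhaustive search on the validation set. A generic sketch is shown below; fit and error_rate stand for whatever training routine and error measure are being tuned (for example, the psvm sketch given earlier) and are placeholders, not routines defined by the paper.

```python
import itertools
import numpy as np

def select_on_validation(fit, error_rate, train, val, grid):
    """Pick the parameter combination with the lowest validation error.

    fit(train, **params) -> model;  error_rate(model, data) -> float in [0, 1].
    grid is a dict such as {"C": np.logspace(-1, 2, 4), "sigma": 2.0 ** np.arange(-2, 3)}.
    """
    names = list(grid)
    best = (np.inf, None)
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        model = fit(train, **params)
        err = error_rate(model, val)
        if err < best[0]:
            best = (err, params)
    return best   # (validation error, chosen parameters)
```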

It is also expected that the estimated Cox model is not accurate due to the small sample size.

TABLE V. TEST ERRORS (%) AS A FUNCTION OF TRAINING SAMPLE SIZE.

    Training size    50           100          250          400
    Censoring        16.6%        15.9%        16.4%        16.1%
    Cox              38.1                      29.2 ± 1.0   27.5 ± 1.0
    psvm linear      37.3                      27.9 ± 1.1   25.6 ± 1.4
    psvm rbf         35.8                      28.3 ± 1.3   26.1 ± 0.9
    LUPI             38.3                      28.7 ± 1.9   26.2 ± 1.4

Table V shows the relative performance of the four methods as a function of sample size. The psvm with linear kernel outperforms all other methods when the training sample size is larger than 250. This is not surprising, because the linear space matches the synthetic data model. As expected, with an increasing number of training samples, the relative advantage of the SVM-based methods becomes more noticeable. Nonetheless, the Cox model is more competitive for a moderate training sample size (100).

B. Noise Level in the Survival Time

To examine the effect of the noise level in the survival time on the test errors, noise with different variances is added to the survival time. The noise variance ranges from 0 to 0.5, and the training and validation sample sizes are kept at 250. The test errors are summarized in Table VI.

TABLE VI. TEST ERRORS (%) AS A FUNCTION OF NOISE LEVEL.

    Noise variance   0                                      0.5
    Censoring        15.9%        16.0%        17.2%        17.7%
    Cox                                                     ± 1.3
    psvm linear      14.2 ± 1.0                             ± 1.1
    psvm rbf         15.1                                   ± 1.4
    LUPI             14.3                                   ± 2.0

It is evident that the test errors of all methods are reduced when the noise variance is decreased. When there is no noise in the survival time, the data are generated from a distribution that follows the Cox modeling assumption, so the Cox model is expected to achieve the lowest test error under the low-noise scenario. However, increasing the noise level has a much larger negative effect on the Cox modeling approach: its test error increases from 11% to 36% when the noise level is raised from 0 to 0.5, whereas for the same change in noise level the test errors of the SVM-based approaches rise from about 14% to 35%. Apart from the zero-noise scenario, the psvm with linear kernel achieves the lowest average test error when the noise variance is less than 0.2. LUPI, however, has the best performance when the noise level is higher than 0.2. It can be concluded that the SVM-based methods are more robust to noisy data.

C. Proportion of Censoring

We also adjust the proportion of censoring in the training data to investigate the effect of censoring on the test errors. The percentage of censored observations in the training data varies from 16% to 46% in our experiment. The noise variance is set to 0.2, and the training and validation sample sizes are kept at 250. The experiment results are summarized in Table VII.

TABLE VII. TEST ERRORS (%) AS A FUNCTION OF CENSORING RATE.

    Censoring        16.1%        30.6%        38.6%        46.0%
    Cox              27.4                                   ± 1.0
    psvm linear      26.1                                   ± 2.4
    psvm rbf         26.9                                   ± 1.4
    LUPI             28.0                                   ± 1.5

When less than 30% of the training data are censored, the psvm with linear kernel gives the lowest test error. On the contrary, if a large portion of the observations are censored (about 40% or more), the psvm with RBF kernel outperforms all other methods. With more censored observations in the training set, more observed survival times are produced by the non-linear min operator in (1). Hence, the linearity within the data is no longer maintained, and methods with a non-linear parameterization (kernel) are expected to achieve better performance.

VI. REAL-LIFE DATA SETS

This section describes empirical comparisons using four real-life data sets from the Survival package in R [12].
For all comparisons, the decision space for SVM+ uses the linear kernel, while the correcting space uses the RBF kernel. For the psvm method, both linear and RBF kernels are investigated. In all experiments, the time of interest τ was set to the median of the observed survival times. Our experiments for the four medical data sets follow this procedure [2], [10]:

- Use five-fold cross-validation to estimate the test errors.
- Within each training fold, the parameter tuning (model selection) is performed through five-fold resampling.

This experimental set-up is a double resampling procedure [2]: one level of resampling is used for estimating the test error of a learning method, and the second level is used for tuning the model parameters (model selection). During the model selection stage, the possible choices of tuning parameters are C and C* in the range [10^-1, 10^2], γ in [10^-3, 10^-1], and σ in [2^-2, 2^2]. Since there is no definite class label for a censored observation with U_i < τ, the test errors are reported based on samples with definite labels, i.e., exact observations and censored observations with U_i ≥ τ. Likewise, model parameters are selected based on performance on the samples with well-defined labels. A sketch of this double resampling procedure is given below, followed by descriptions of the four data sets.
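A minimal sketch of the double resampling (nested cross-validation) loop follows. fit, error_rate, and select_params are the same kinds of placeholders used in the earlier sketches, and the error is computed only on test samples whose label at τ is well defined, as described above.

```python
import numpy as np
from sklearn.model_selection import KFold

def nested_cv_error(X, y, defined, fit, error_rate, select_params, n_outer=5, seed=0):
    """Outer CV estimates the test error; select_params tunes parameters on each training fold."""
    outer = KFold(n_splits=n_outer, shuffle=True, random_state=seed)
    errors = []
    for train_idx, test_idx in outer.split(X):
        params = select_params(X[train_idx], y[train_idx])     # inner five-fold resampling
        model = fit(X[train_idx], y[train_idx], **params)
        keep = defined[test_idx]                               # only well-defined labels
        errors.append(error_rate(model, X[test_idx][keep], y[test_idx][keep]))
    return float(np.mean(errors)), float(np.std(errors))
```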

1) Veteran Data Set: The veteran data set is from the Veterans' Administration Lung Cancer Study, a randomised trial of two treatment regimens for lung cancer. The veteran data set contains 137 instances (observations), and each instance has 10 attributes. Less than 7% of the instances are censored. Among the nine censored instances, one has an observed survival time less than the time of interest; in other words, only one instance is associated with an uncertain class label in the veteran data set.

2) Lung Data Set: The lung data set is from a study of survival and usual daily activities in patients with advanced lung cancer, conducted by the North Central Cancer Treatment Group (NCCTG). There are 167 instances in this data set, and each instance has 8 attributes. About 28% of the instances are censored, and 2 censored instances are linked to uncertain class labels.

3) PBC Data Set: The pbc data set is from the Mayo Clinic trial in primary biliary cirrhosis (PBC) of the liver, conducted between 1974 and 1984. The pbc data set contains 258 instances, and each instance has 22 attributes. More than half of the instances are censored, and 54 censored instances do not have definite class labels.

4) Stanford2 Data Set: The fourth data set is the stanford2 data set from the Stanford Heart Transplant data, which contains 157 instances, each with 2 attributes. More than 35% of the instances are censored, and 8 of them are associated with uncertain labels.

The descriptions of the data sets are summarized in Table VIII. The fourth row indicates the proportion of censored observations in each data set. The fifth row shows the number of censored observations with U_i < τ, where τ is set to the median of the observed survival times. Table VIII also shows the test errors obtained by the different methods on the four data sets.

TABLE VIII. SUMMARY OF THE Survival DATA SETS AND THE EXPERIMENT RESULTS (TEST ERRORS IN %).

    Data set           Veteran      Lung         PBC          Stanford2
    Size               137          167          258          157
    Attributes         10           8            22           2
    δ = 1
    Censored %
    Uncertain labels   1            2            54           8
    Cox                23.4                                   ± 4.7
    psvm linear        27.2                                   ± 7.4
    psvm rbf           32.0                                   ± 6.2
    LUPI               30.4                                   ± 7.7

Note that the SVM-based approaches achieve the lowest test error on three of the four data sets, while the Cox model gives the best performance on the veteran data set. In these experiments, the number of training samples is fixed, so we cannot draw conclusions regarding the effect of sample size on the methods' performance. However, we can make inferences about inherent non-linearity in some of the data sets. For example, on the stanford2 data set the non-linear psvm performs much better than the other methods, which use linear parameterization, so we can infer that this data set requires non-linear modeling. These results also illustrate the effect of censoring on generalization performance: for a small proportion of censoring (such as 6%), the Cox model gives the lowest test error, whereas the SVM-based methods show their advantages when the proportion of censoring increases. Further, the relative advantage of the SVM-based approaches becomes quite evident for higher-dimensional survival data.

These results also show large variability of the estimated test errors, due to the partitioning of the available data into five (training, test) folds. This variability is reflected in the large standard deviations of the test error rates. Direct comparisons suggest that SVM-based methods yield smaller or similar test error in each (training, test) fold. Another source of variability in the SVM-based model estimates is model selection via resampling. Notably, the standard deviations of the error rates for all SVM-based methods in Table VIII are consistently higher than the standard deviations for the Cox model (which has no tunable parameters). This underscores the importance of robust model selection strategies for SVM-based methods, which will be the focus of our future work.

VII. DISCUSSION AND CONCLUSIONS

This paper proposes predictive modeling of high-dimensional survival data as a binary classification problem. We apply the LUPI formulation and SVM with uncertain class labels to solve the problem. Both methods incorporate the information about survival time to estimate an SVM classifier. We have illustrated the advantages and limitations of these modeling approaches using synthetic and real-life data sets.
Advanced SVM-based methods appear very effective when the proportion of censoring in the training data is large, or when the observed survival time does not follow the classical probabilistic assumptions, e.g., the exponential distribution [1], [11]. On the other hand, with fewer censored observations the Cox modeling approach may perform better. Further, the relative performance of LUPI and psvm depends on the intrinsic linearity or non-linearity of the data itself. In particular, the superior performance of psvm with RBF kernel on the stanford2 data indicates an intrinsic non-linearity of this data set.

Equal misclassification costs are assumed throughout this paper; however, realistic medical applications use unequal costs. We will incorporate different misclassification costs into the proposed SVM-based formulations. Further, our methodology for predictive modeling of survival data can be readily extended to other (non-medical) applications, such as predicting business failure (bankruptcy) or marriage failure (divorce).

REFERENCES

[1] O. Aalen, Ø. Borgan, H. Gjessing, and S. Gjessing, Survival and Event History Analysis: A Process Point of View, ser. Statistics for Biology and Health. Springer-Verlag New York.
[2] V. Cherkassky and F. Mulier, Learning from Data: Concepts, Theory, and Methods. Wiley.
[3] F. Khan and V. Zubek, "Support Vector Regression for censored data (SVRc): a novel tool for survival analysis," in Data Mining, ICDM '08, Eighth IEEE International Conference on, Dec. 2008.
[4] J. Shim and C. Hwang, "Support vector censored quantile regression under random censoring," Comput. Stat. Data Anal., vol. 53, no. 4, Feb.
[5] P. K. Shivaswamy, W. Chu, and M. Jansche, "A support vector approach to censored targets," in Proceedings of the 2007 Seventh IEEE International Conference on Data Mining, ser. ICDM '07. Washington, DC, USA: IEEE Computer Society, 2007.
[6] V. N. Vapnik, Estimation of Dependences Based on Empirical Data; Empirical Inference Science: Afterword. Springer.
[7] V. Vapnik and A. Vashist, "A new learning paradigm: Learning using privileged information," Neural Networks, vol. 22, no. 5-6, July 2009.
[8] E. Niaf, R. Flamary, C. Lartizien, and S. Canu, "Handling uncertainties in SVM classification," in Statistical Signal Processing Workshop (SSP), 2011 IEEE, June 2011.
[9] J. C. Platt, "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods," in Advances in Large Margin Classifiers. MIT Press, 1999.
[10] L. Liang, F. Cai, and V. Cherkassky, "Predictive learning with structured (grouped) data," Neural Networks, vol. 22, no. 5-6, 2009.
[11] M. Zhou, "Use software R to do survival analysis and simulation: a tutorial," mai/rsurv.pdf.
[12] T. M. Therneau, A Package for Survival Analysis in R, 2013, R package version. [Online].

