Discrete Event Simulation

*We are currently working on README instructions for the DES SAS code and will upload the SAS code soon.*

About DES

Discrete event simulation (DES) is a method used to model real-world systems that can be decomposed into a set of logically separate processes that autonomously progress through time. Each event occurs on a specific process and is assigned a logical time (a timestamp). The result of an event can be an outcome passed to one or more other processes, and the content of that outcome may generate new events to be processed at some specified future logical time.

The underlying statistical paradigm that supports DES is based in queuing theory. The approach has historically been used to evaluate telephone call scheduling and, more recently, computer network job allocation. A useful queuing model both (a) represents a real-life system with sufficient accuracy and (b) is analytically tractable. A queuing model based on the Poisson process and its companion exponential probability distribution often meets these two requirements.

A Poisson process models random events (such as a customer arrival, a request for action from a web server, or the completion of the actions requested of a web server) as emanating from a memoryless process. That is, the length of the time interval from the current time to the occurrence of the next event does not depend upon the time of occurrence of the last event. In the Poisson probability distribution, the observer records the number of events that occur in a time interval of fixed length; in the (negative) exponential probability distribution, the observer records the length of the time interval between consecutive events. In both, the underlying physical process is memoryless. Models based on the Poisson process often respond to inputs from the environment in a manner that mimics the response of the system being modeled to those same inputs. The analytically tractable models that result yield both information about the system being modeled and the form of their solution.
Even a queuing model based on the Poisson process that does a relatively poor job of mimicking detailed system performance can be useful. The fact that such models often give "worst-case" scenario evaluations appeals to system designers who prefer to include a safety factor in their designs. Also, the form of the solution of models based on the Poisson process often provides insight into the form of the solution to a queuing problem whose detailed behavior is poorly mimicked. As a result, queuing systems are frequently modeled as Poisson processes through the use of the exponential distribution.
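The mechanics described above, timestamped events kept on a queue, where processing one event may schedule new events at future logical times, can be sketched as a minimal event loop. The single-server queue, the rates, and the function name below are illustrative assumptions for exposition only, not part of the SAS solution:

```python
import heapq
import random

def run_des(arrival_rate, service_rate, horizon, seed=0):
    """Minimal discrete event simulation of a single-server queue.

    Events are (timestamp, kind) pairs kept in a priority queue;
    handling one event may push new events with future timestamps.
    Returns the number of service completions by the horizon.
    """
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]
    queue_len = 0
    server_busy = False
    completed = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            queue_len += 1
            # memoryless inter-arrival times: schedule the next arrival
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
        else:  # departure
            server_busy = False
            completed += 1
        if queue_len and not server_busy:
            # start service on the next waiting entity
            queue_len -= 1
            server_busy = True
            heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
    return completed
```

Because both inter-arrival and service times are exponential, this sketch is the classic M/M/1 queue; any DES engine generalizes the same loop to more event types and decision rules.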

DES for Clinical Trial Simulation: Application to Phase I Pediatric Trials

The primary objective of phase I pediatric oncology trials is often to define the maximum tolerated dose (MTD). Recent emphasis on more informative trial designs has highlighted the importance of historical data (prior information) and the use of clinical trial simulation to guide the prospective design of studies at all phases of development. Trial simulations typically rely on Monte Carlo-based simulations and may or may not incorporate Markov processes. A major concern for pediatric phase I oncology trials is not necessarily the identification of the MTD (although this is a necessary outcome), but the time it takes to complete such trials and the resultant inter-trial competition for patients.

Consider a study population of patients without any prior expectation of toxicity to an experimental agent or expectation of inevaluability. We can assume that the duration of study of any such patient is an exponential random variable T with parameter λ. To construct a stochastic simulation model for this situation, we assume that the nth patient has a study evaluation time of T_n, an exponential random variable with parameter λ, and that T_1, T_2, ..., T_n are mutually independent for all values of n. Here the event-rate parameter λ is the same for all patients. When a study is initiated at time t = 0, patients are screened, enrolled, and evaluated in a sequence dictated by decision rules regarding event rates (the observation of DLTs in the oncology example), so that at any time t we can observe
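The memoryless property assumed for the evaluation times T_n can be checked empirically: for an exponential variable, P(T > s + t | T > s) equals P(T > t). The helper below is an illustrative sketch; the function name and the default sample size are arbitrary choices:

```python
import random

def memoryless_check(lam, s, t, n=200_000, seed=42):
    """Estimate P(T > s + t | T > s) and P(T > t) for T ~ Exponential(lam).

    For a memoryless distribution the two probabilities coincide
    (both equal exp(-lam * t)).
    """
    rng = random.Random(seed)
    draws = [rng.expovariate(lam) for _ in range(n)]
    exceed_s = [x for x in draws if x > s]
    p_cond = sum(x > s + t for x in exceed_s) / len(exceed_s)
    p_plain = sum(x > t for x in draws) / n
    return p_cond, p_plain
```

With lam = 0.5, s = 1, t = 2, both estimates should sit near exp(-1) ≈ 0.368, regardless of how long the patient has already been under observation.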

N(t) = The number of patients in the study at time t

Then N̄(t_1, t_2), the average number of patients in the study during the interval from t_1 to t_2, can be defined as:

N̄(t_1, t_2) = (1 / (t_2 - t_1)) ∫_{t_1}^{t_2} N(t) dt

Every patient is assumed to have a positive event time, so N(0) = 0, and N(t) is a discrete random variable that depends on a continuous parameter t ≥ 0. Of course, we can further partition the number of patients in the study into mutually exclusive categories that discriminate patient status either by time or by event outcome:

N_E(t) = The number of enrolled patients under evaluation at time t

N_IE(t) = The number of inevaluable patients at time t

N_C(t) = The number of patients completed (without DLT) at time t

N_DLT(t) = The number of patients with a DLT at time t

where

N_E(t) + N_IE(t) + N_C(t) + N_DLT(t) = N(t)

Likewise, the averages N̄_E(t_1, t_2), N̄_IE(t_1, t_2), N̄_C(t_1, t_2), and N̄_DLT(t_1, t_2) can be defined in a manner analogous to N̄(t_1, t_2).
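Because a patient census N(t) is piecewise constant, the time average over [t_1, t_2] reduces to a sum of level-times-duration terms divided by the interval length. The helper below is an illustrative sketch of that computation, not the SAS implementation:

```python
def average_census(jump_times, levels, t1, t2):
    """Time-average of a piecewise-constant N(t) over [t1, t2].

    jump_times[i] is the time at which N(t) becomes levels[i];
    the last level is assumed to hold through t2.
    """
    total = 0.0
    for i, (start, level) in enumerate(zip(jump_times, levels)):
        end = jump_times[i + 1] if i + 1 < len(jump_times) else t2
        # clip each constant segment to the averaging window
        lo, hi = max(start, t1), min(end, t2)
        if hi > lo:
            total += level * (hi - lo)
    return total / (t2 - t1)
```

For example, a census that is 0 on [0, 1), 1 on [1, 3), and 2 on [3, 4) averages (0 + 2 + 2) / 4 = 1.0 patients over [0, 4]. The same routine applies to each category count N_E, N_IE, N_C, and N_DLT separately.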

We can then define a complementary continuous random variable, S_n, which depends on the discrete parameter n, as the sum of the event times of the first n patients evaluated.

S_n = T_1 + T_2 + ... + T_n

Since each T_k is an exponential random variable, S_n is a gamma random variable with parameters n and λ. For any time t ≥ 0, the following two events are the same:

{S_n ≤ t} = {N(t) ≥ n}

Hence, if the sum of the event times of the first n patients is at most t, then at least n patients have reached their event by time t, and vice versa. From this equality we observe that a family of continuous random variables (the left side of the equation) is indexed by the discrete parameter n, while on the right side a family of discrete random variables is indexed by the continuous parameter t. From the gamma density formula and the equation above, we can derive the probability mass function for the random variable N(t):
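The set equality can be demonstrated path by path: on every simulated sequence of exponential gaps, S_n ≤ t holds exactly when at least n events fall in [0, t]. The check below is an illustrative sketch; the rates and trial counts are arbitrary:

```python
import itertools
import random

def events_agree(lam, n, t, trials=10_000, seed=7):
    """Verify {S_n <= t} == {N(t) >= n} on every simulated path."""
    rng = random.Random(seed)
    for _ in range(trials):
        gaps = [rng.expovariate(lam) for _ in range(n + 50)]
        arrivals = list(itertools.accumulate(gaps))
        s_n = arrivals[n - 1]                  # S_n: time of the nth event
        count = sum(a <= t for a in arrivals)  # N(t): events by time t
        if (s_n <= t) != (count >= n):
            return False
    return True
```

Since the arrival times are sorted, count ≥ n holds exactly when the nth arrival is at most t, so the two indicators agree on every path, not just on average.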

P{N(t) = n} = e^(-λt) (λt)^n / n!,   n = 0, 1, 2, ...

which shows that N(t) has a Poisson distribution with parameter λt.
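This result can be verified numerically by accumulating exponential inter-event times and counting how many fall within [0, t]; the empirical frequencies should match the Poisson mass function. The function names, λ, and t below are illustrative choices:

```python
import math
import random

def empirical_pmf(lam, t, n_max, trials=100_000, seed=3):
    """Estimate P{N(t) = n} by simulating exponential gaps with rate lam."""
    rng = random.Random(seed)
    counts = [0] * (n_max + 1)
    for _ in range(trials):
        elapsed, n = 0.0, 0
        while True:
            elapsed += rng.expovariate(lam)
            if elapsed > t:
                break
            n += 1
        if n <= n_max:
            counts[n] += 1
    return [c / trials for c in counts]

def poisson_pmf(lam, t, n):
    """Theoretical P{N(t) = n} = e^(-lam*t) (lam*t)^n / n!."""
    return math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
```

With lam = 2 and t = 1.5 (so λt = 3), the simulated frequencies agree with the closed-form mass function to within Monte Carlo error.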

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Article Title: A SAS-based solution to evaluate study design efficiency of phase I pediatric oncology trials via discrete event simulation

Authors: Barrett JS, Jayaraman B, Patel D, Skolnik J

Abstract

Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.