\documentclass[a4paper, oneside, 10pt]{article}
\usepackage[unicode]{hyperref}
\usepackage[utf8x]{inputenc}
\usepackage[english]{babel}
\date{\today}
\title{}
\author{}
\begin{document}
\section{\texorpdfstring{Schedule Details}{Schedule Details}}
\label{sec:schedule_details}
\subsection{\texorpdfstring{Day 1 Overview:}{Day 1 Overview}}
\label{sec:day_1_overview}
\emph{\textbf{\underline{Models, data, and uncertainty}}}
Morning:
\begin{itemize}
\item 9:00-9:30AM Introduction and logistics
\item 9:30-10:30AM Introduction to the course: Data, models and uncertainty
\end{itemize}
We will introduce basic concepts of ecological modelling, focusing on how models are used to learn from data. The concept of stochasticity will be introduced.
\begin{itemize}
\item 10:30-11:00AM Break
\item 11:00-12:00 PM Deterministic models
\end{itemize}
We will introduce basic concepts of ecological modelling, focusing on how models are used to learn from data. Participants will identify simple deterministic models in their individual fields to illustrate that there is a compact set of mathematical relationships that are widely used by ecologists to portray relationships among variables and the operation of processes in ecology (e.g., simple linear relationships, asymptotic functions, models of competing controls, polynomials, change-points, simple dynamic models).
\begin{itemize}
\item 12-1:00PM Lunch
\end{itemize}
\begin{center}
\line(1,0){250}
\end{center}
Afternoon:
\begin{itemize}
\item 1:00-2:00 PM Lecture: Probability distributions
\end{itemize}
Participants will learn basic theory of statistical distributions, beginning with a review of general concepts of probability density functions, and discrete density functions, cumulative distribution functions, and quantile functions. Moment matching.
\begin{itemize}
\item 2:00-2:15 PM Break
\item 2:15-5:00 PM Lab: Probability distributions
\end{itemize}
Specific distributions for discrete and continuous data will be taught using a series of problems based on ecological data where participants compute probabilities and probability densities using functions in R. Problems will challenge participants to understand the relationships between shape parameters and moments of distributions and how to calculate one from another. This understanding will also be useful for our later discussions of choosing priors.
\begin{center}
\line(1,0){250}
\end{center}
\begin{center}
\line(1,0){250}
\end{center}
\subsection{\texorpdfstring{Day 2.}{Day 2}}
\label{sec:day_2}
\emph{\textbf{\underline{Likelihood and Bayes intro}}}
Morning:
\begin{itemize}
\item 9:00-9:30 AM Doubts \& Review
\item 9:30-10:30 AM Lecture: Introduction to likelihood
\end{itemize}
The concepts of likelihood functions and maximum likelihood estimation will be introduced, anticipating the role of likelihood in Bayes theorem.
\begin{itemize}
\item 10:30 AM-12:30 PM Lab: Probability and likelihood
\end{itemize}
Participants will plot probability distributions and likelihood profiles for diverse ecological datasets to understand the difference between the probability of data (holding parameters constant) and the likelihood of the parameters (holding data constant). They will work through a spreadsheet example to clearly reveal how parameters are estimated in the likelihood framework using optimization methods.
\begin{itemize}
\item 12:30-1:30 PM Lunch
\end{itemize}
\begin{center}
\line(1,0){250}
\end{center}
Afternoon:
1:30-2:30 PM Derive Bayes Theorem
Bayes theorem will be derived graphically and algebraically. We will teach the component distributions of Bayes theorem applied to models and data: the posterior, the likelihood, the prior, and the marginal distribution of the data.
\begin{itemize}
\item 2:30-3:00 PM Break
\item 3:00-5PM Lab: Bayes theorem
\end{itemize}
Simple examples will be offered to assure that participants thoroughly understand each component of Bayesian analysis applied to models and data. Anticipating work on Markov Chain Monte Carlo later in the course, we will particularly stress the role of the marginal distribution of the data as a normalizing constant for the posterior. Sensitivity to prior and data (likelihood).
\begin{center}
\line(1,0){250}
\end{center}
\begin{center}
\line(1,0){250}
\end{center}
\subsection{\texorpdfstring{Day 3.}{Day 3}}
\label{sec:day_3}
\emph{\textbf{\underline{Priors and MCMC intro}}}
Morning:
\begin{itemize}
\item 9:00-9:30AM Doubts \& Review
\item 9:30-10:30 AM Conjugacy and moment matching
\end{itemize}
As background for Gibbs sampling, we will introduce conjugate prior-likelihood relationships.
\begin{itemize}
\item 10:30AM-12:30 PM Lab: Conjugacy and moment matching
\end{itemize}
Using data from one or two examples, participants will choose an appropriate likelihood function and prior will calculate a posterior distribution of parameters using conjugacy. They will then estimate prevalence using all of the components of Bayes theorem, integrating the marginal distribution of the data using numerical integration in R. These estimates will be compared with estimates obtained using conjugate prior-likelihood relationships. Throughout the day, similarities and differences between maximum likelihood estimation and Bayesian estimation will be discuss
\begin{itemize}
\item 12:30-1:30PM Lunch
\end{itemize}
\begin{center}
\line(1,0){250}
\end{center}
Afternoon:
\begin{itemize}
\item 1:30-3:00 PM MCMC
\end{itemize}
Simple, step by step examples will illustrate how the Metropolis algorithm works, and these examples will be expanded to include Metropolis-Hastings and Gibbs sampling.
\begin{itemize}
\item 3-3:15 PM Break
\end{itemize}
\begin{itemize}
\item 3:15-5 PM Lab: MCMC
\end{itemize}
Participants will construct a Gibbs sampler to estimate parameters in a simple problem and will use this problem to understand critical concepts including initialization, burn-in, mixing, and convergence.
\begin{center}
\line(1,0){250}
\end{center}
\begin{center}
\line(1,0){250}
\end{center}
\subsection{\texorpdfstring{Day 4.}{Day 4}}
\label{sec:day_4}
\emph{\textbf{\underline{MCMC and JAGS}}}
Morning:
\begin{itemize}
\item 9:00-9:30AM Doubts \& Review
\item 9:30-10:00 AM Intro to JAGS
\end{itemize}
\begin{itemize}
\item 10:00-10:15 Break
\end{itemize}
\begin{itemize}
\item 10:15-12:30 Lab: JAGS
\end{itemize}
Participants will work though a tutorial on MCMC software (JAGS and relevant R packages) using Bayesian linear and non-linear models as examples. Data sets available for problems will cover a number of examples.
\begin{itemize}
\item 12:30-1:30PM Lunch
\end{itemize}
\begin{center}
\line(1,0){250}
\end{center}
Afternoon:
\begin{itemize}
\item 1:30-3:30 PM Intro to JAGS (cont)
\item 3:30-4:00 Break
\end{itemize}
\begin{itemize}
\item 4:00-5:00PM Bayesian Regression
\end{itemize}
\begin{center}
\line(1,0){250}
\end{center}
\begin{center}
\line(1,0){250}
\end{center}
\subsection{\texorpdfstring{Day 5.}{Day 5}}
\label{sec:day_5}
\emph{\textbf{\underline{Hierarchical Bayes \& MODEL EVALUATION AND SELECTION}}}
\begin{itemize}
\item 9:00-9:30AM Doubts \& Review
\end{itemize}
Test understanding of evening exercise with an example. Use example from MH from Day 4 evening.
\begin{itemize}
\item 9:30-10:30AM Hierarchical models
\end{itemize}
Hierarchical structures and the concept of hyper-parameters will be introduced. We will begin with a simple example of estimating a mean assuming only sampling variation. We will then estimate the mean incorporating variation among individuals, modeling variation in individual-level parameters around an overall or “global” mean. This example will be extended by encouraging participants to “discover” group level effects [also known as random effects, (Gelman and Hill, 2009)] in a simple linear regression, where the intercept terms differ among sites, but are drawn from a shared distribution. Discuss relationships between random and fixed effects.
\begin{itemize}
\item 10:30-10:45AM Break
\end{itemize}
\begin{itemize}
\item 10:45-12:30PM Hierarchical models lab
\end{itemize}
Participants will work though a tutorial on MCMC software (JAGS or OpenBUGS and relevant R packages) using Bayesian linear and non-linear models as examples.
\begin{itemize}
\item 12:30-1:30PM Lunch
\end{itemize}
\begin{center}
\line(1,0){250}
\end{center}
Afternoon:
\begin{itemize}
\item 1:30-3:00PM Model Evaluation \& selection
\end{itemize}
Posterior predictive checks and Bayesian p-values will be introduced as a way to check goodness-of-fit of models and to evaluate choices of model structure. We will emphasize that the problem of model selection is not as straightforward as many ecologists might believe and there is no consensus among statisticians on a single, preferred approach. (Link and Barker, 2006).
\begin{itemize}
\item 3-3:15 PM Break
\item 3:15-5:00PM Lab: Model evaluation and selection
\end{itemize}
These methods will be illustrated using models fit on prior days. Methods for model selection in the Bayesian framework will be discussed including the deviance information criterion, posterior predictive loss, posterior model probabilities, and Bayes factors.
\begin{center}
\line(1,0){250}
\end{center}
\begin{center}
\line(1,0){250}
\end{center}
\subsection{\texorpdfstring{Day 6}{Day 6}}
\label{sec:day_6}
\emph{\textbf{\underline{Latent states, process models, and data models}}}
\begin{itemize}
\item 9:00-9:30AM Doubts \& Review
\item 9:30-10:30AM Lecture (Break it up and integrate with lab).
\end{itemize}
A general framework for linking ecological models to data will be presented, where unobservable latent states are portrayed by process models that are linked to observable quantities by data models (Cressie et al., 2009, equation 4), i.e., P (parameters, process , data) / P (datajprocess, data parameters) (1)\_ P ( processjprocess parameters)\_ P (all parameters). Reinforce buiding HB, DAG, etc.
\begin{itemize}
\item 10:30-10:45AM Break
\item 10:45-12:30 PM Lab
\end{itemize}
This highly general framework will be initially illustrated with an exercise incorporates process error..
\begin{itemize}
\item 12:30-1:30PM Lunch
\end{itemize}
\begin{center}
\line(1,0){250}
\end{center}
Afternoon:
\begin{itemize}
\item 1:30-2:30PM Lecture: Occupancy models
\end{itemize}
Occupancy model development as a hierarchical problem.
\begin{itemize}
\item 2:30-4:30PM Lab: Occupancy models
\end{itemize}
This highly general framework will be initially illustrated with an exercise on modeling habitat occupancy by birds. The true state of a habitat (occupied or not) will be modeled as a function of covariates describing the habitat. The observed state (detected or undetected) will be modelled to estimate the probability that the bird is observed given that it is present. The data model will then be expanded to model detection probability as a function of covariates. This scenario is an exemplar for demonstrating the utility of a hierarchical model.
\end{document}