- Estimate 80% kernel home ranges with a bandwidth parameter of 200 meter and 20 meter. What are the differences, and which bandwidth do you think is more appropriate given the data?

The area of the 80% kernel home ranges estimated with a 200 meter bandwidth is much larger than that estimated with a 20 meter bandwidth. The 80% kernel home ranges with h=200 are 8-15 times larger than those estimated with h=20:

```
# Estimate utilisation distribution with 200 m bandwidth:
kud.manual.h200 = kernelUD(GPSData['ID'],grid=fullgrid,h=200)
# Estimate utilisation distribution with 20 m bandwidth:
kud.manual.h20 = kernelUD(GPSData['ID'],grid=fullgrid,h=20)
# Estimate the area of the 80% kernel home range:
khr.80.h200.area <- kernel.area(kud.manual.h200, percent=c(80))
khr.80.h20.area <- kernel.area(kud.manual.h20, percent=c(80))
# calculate the ratio of the two estimates:
(khr.80.h200.area/khr.80.h20.area)
```

Plot the 80% kernel home ranges for the two bandwidth parameters:

```
# calculate the contours of the homeranges for plotting
khr.80.h200.contours=getverticeshr(kud.manual.h200, percent=80)
khr.80.h20.contours=getverticeshr(kud.manual.h20, percent=80)
# plot the basemap
image(BathymetryMap,xlim=c(115000,120000),ylim=c(547000,555000))
# plot the homerange (example again for bird 406) on the base map
plot(khr.80.h200.contours["406",],add=T)
plot(khr.80.h20.contours["406",],add=T)
# plot the underlying data points
points(GPSData[which(GPSData$ID=="406"),]@coords,col='red',pch='.')
```

The 200 meter bandwidth is too large, and does not provide a good estimate of the utilisation distribution. The 80% kernel home range should be the **smallest** area where an animal is found at least 80% of the time. Especially around the high tide roosts you can see that the kernel home range is overestimated in size: smaller home range areas, containing the same number of points, could be drawn around the roosts.

- Explore how home range size varies at different percentage levels. Is there a cut-off point where the size suddenly becomes much larger? If so, why would this happen?

Let’s calculate the kernel home range areas at different percentage levels, and plot how the kernel home range increases:

```
percentage.levels=seq(10,95,5) # a sequence from 10 - 95 in steps of 5
khr.areas=kernel.area(kud.manual.h20, percent=percentage.levels)
# plot the kernel home range size at different percentage levels, for the first bird (406)
plot(percentage.levels,khr.areas[,1],xlab="percentage level", ylab="kernel home range")
```

Above around 70% the home range area increases steeply, while at the lower percentage levels the area changes little. This is why the 80% or 95% kernel home range is often taken as a measure of the general area in which the animal lives, while a lower percentage level (e.g. 20%) kernel home range is a measure of the areas that are used intensively (core areas).
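The steep increase near 100% is a general property of smooth utilisation distributions: the tails of the UD cover a lot of ground while containing little probability mass. For a circular bivariate normal UD (a toy model, not the Oystercatcher data) the p-level home range is a disk whose area can even be written down analytically, which reproduces the shape of the curve above:

```
# home range area A(p) for a circular bivariate normal UD with
# standard deviation sigma: A(p) = -2*pi*sigma^2*log(1-p)
hr.area <- function(p, sigma = 100) -2 * pi * sigma^2 * log(1 - p)
percentage.levels <- seq(10, 95, 5)
plot(percentage.levels, hr.area(percentage.levels / 100) / 1e4,
     xlab = "percentage level", ylab = "home range area [ha]")
# the area changes slowly at low levels, but increases steeply above ~70%
```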

- Split the utilisation distribution into two components, by separately estimating a distribution for the times when the birds were active, and the times when birds were standing still. Estimate the kernel bandwidth for each distribution by least-squares cross-validation. Are the estimated bandwidths very different in the two cases? Explain whether you would expect such a difference in estimated bandwidth, given the differences in the Oystercatchers' movement pattern between the two cases.

First inspect the values of `ACCFractionActive`, e.g. by plotting a histogram:

```
hist(GPSData$ACCFractionActive)
ACC.median=median(GPSData$ACCFractionActive,na.rm=T)
```

The values are somewhat bimodally distributed, with a large peak at `ACCFractionActive` > 0.9, which are the very active birds. To split the birds into two equally sized groups, let's use the median `ACC.median` as the classification threshold.

```
# make two subselections, active and inactive
selected.indices=which(GPSData@data$ACCFractionActive>ACC.median)
GPSDataActive=GPSData[selected.indices,]
selected.indices=which(GPSData@data$ACCFractionActive<ACC.median)
GPSDataInActive=GPSData[selected.indices,]
# and plot the two subselections:
image(BathymetryMap,xlim=c(115000,120000),ylim=c(547000,555000))
points(GPSDataActive@coords,col='green',pch='.')
points(GPSDataInActive@coords,col='red',pch='.')
```

Let’s estimate a utilisation distribution for each of these datasets

```
# automated estimation of the bandwidth
kud.auto.active = kernelUD(GPSDataActive['ID'],grid=fullgrid,h="LSCV",hlim = c(0.001, 1.5))
kud.auto.inactive = kernelUD(GPSDataInActive['ID'],grid=fullgrid,h="LSCV",hlim = c(0.001, 1.5))
# inspect the bandwidth (for bird 406)
kud.auto.active$"406"@h$h
kud.auto.inactive$"406"@h$h
```

The utilisation distribution (UD) that was estimated on the *active* relocations of bird 406 has a bandwidth of 28.6, while the bandwidth of the UD when the bird was *inactive* equals 6.9. This analysis shows that the best choice for the kernel bandwidth may depend on the movement behaviour of the animal. If an animal moves around considerably, and we take hourly GPS positions (like in this study), each hourly GPS position is only a sample of where the animal has been that hour. Considerable uncertainty remains about the whereabouts of the animal within that hour, and the kernel bandwidth should represent that uncertainty. On the other hand, if an animal is immobile and always stays at the same location (such as at a high tide roost), each hourly relocation is highly representative of the animal's true whereabouts within that hour, and the kernel bandwidth can therefore be taken much smaller.
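A one-dimensional analogue illustrates the same point: base R's `bw.ucv()` performs least-squares (unbiased) cross-validation, like the LSCV option of `kernelUD`. The simulated relocations below are purely illustrative, not the Oystercatcher data:

```
# simulated 1-D relocations for two movement modes
set.seed(1)
inactive <- rnorm(500, mean = 0, sd = 5)    # tightly clustered around a roost
active   <- rnorm(500, mean = 0, sd = 100)  # spread over a much larger area
# least-squares cross-validation selects a small bandwidth for the
# clustered data, and a much larger one for the widely spread data:
bw.ucv(inactive)
bw.ucv(active)
```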

- Is the kernel home range of birds during low tide (when birds are foraging) different in size than during high tide (when birds cannot forage)? Statistically test this difference.

To answer this question, you must first decide which kernel home range to use. If you are interested in areas of core and intensive use, you should calculate the home range at a low percentage level. If you are interested in the more general area that the animal uses, you should take a higher percentage level. Let's take the 80% kernel home range, and calculate the areas.

```
# plot a histogram of the tidal heights:
hist(GPSData@data$RefTidalHeight)
# make two subselections, high tide and low tide
selected.indices=which(GPSData@data$RefTidalHeight<0)
GPSDataLow=GPSData[selected.indices,]
selected.indices=which(GPSData@data$RefTidalHeight>0)
GPSDataHigh=GPSData[selected.indices,]
# let's use a kernel bandwidth of 30 to estimate the UD:
# (i.e. in between the active and inactive estimates for
# the bandwidths we found earlier - but you may also use
# cross validation to estimate seperate bandwidths for
# each)
kud.low = kernelUD(GPSDataLow['ID'],grid=fullgrid,h=30)
kud.high = kernelUD(GPSDataHigh['ID'],grid=fullgrid,h=30)
# and calculate the 80% kernel home range areas
khr.low.area=kernel.area(kud.low, percent=80)
khr.high.area=kernel.area(kud.high, percent=80)
```

You can apply a simple t-test to check whether the mean low tide 80% kernel home range is significantly different from the mean high tide 80% kernel home range (using the individual birds as the sampling unit):

```
t.test(unlist(khr.low.area),unlist(khr.high.area))
# unlist is applied to convert the dataframes khr.low.area and
# khr.high.area into simple vectors of numbers
```

The 80% kernel home range during low tide is larger than during high tide (Welch two sample t-test, t=5.9, df=9, p<0.001).

- Based on your estimation of the home ranges sizes, which birds would you classify as territorial, and which birds not? Is a sample size of 8 birds sufficient to study individual differences in movement patterns?

Let’s plot the 80% kernel home range sizes for the birds as a barplot:

`barplot(unlist(khr.areas['80',]),ylim=c(0,100),xlab="bird",ylab="80% kernel home range [ha]")`

There is clearly variation in 80% kernel home range size - bird 416 uses the smallest area, while bird 424 uses the largest area, which you may also visualize with a plot:

```
image(BathymetryMap,xlim=c(115000,120000),ylim=c(547000,555000))
plot(khr.80.h20.contours['416',],add=T,col='red')
plot(khr.80.h20.contours['424',],add=T,col='green')
```

The variation in kernel home range is not clearly bimodal, and it is therefore difficult to assign birds to a group that behaves territorially (small home range) and a group that does not (large home range). In general, a sample size of 8 birds is too low to study differences within that sample.

- Given that utilisation distributions are normalised probability density functions, and the above definition, which range of values can BA assume? Which value indicates complete overlap?

You can use the property that a utilisation distribution is a normalised probability distribution. BA is maximal when two UDs are exactly the same, i.e. \(UD_1 = UD_2\): \[BA=\int_{-\infty}^\infty \int_{-\infty}^\infty \sqrt{UD_1(x,y)} \sqrt{UD_1(x,y)}\, dx\, dy\] which equals \[\int_{-\infty}^\infty \int_{-\infty}^\infty UD_1(x,y)\, dx\, dy=1\] because the UD is normalised. If \(UD_1\) and \(UD_2\) overlap nowhere, their product at every point (x,y) is zero, and therefore BA is also zero. So BA takes values between 0 and 1, where 0 implies no overlap and 1 complete overlap.
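You can verify these bounds numerically on a discrete grid, where the double integral becomes a sum over grid cells (a standalone sketch, independent of adehabitatHR):

```
# Bhattacharyya's affinity on a discretised grid:
# BA = sum over cells of sqrt(UD1) * sqrt(UD2)
BA <- function(ud1, ud2) sum(sqrt(ud1 * ud2))
ud.a <- c(0.5, 0.5, 0.0, 0.0)     # a normalised UD over four grid cells
ud.b <- c(0.0, 0.0, 0.5, 0.5)     # no overlap with ud.a
ud.c <- c(0.25, 0.25, 0.25, 0.25) # partial overlap with ud.a
BA(ud.a, ud.a)  # complete overlap: 1
BA(ud.a, ud.b)  # no overlap: 0
BA(ud.a, ud.c)  # partial overlap: between 0 and 1
```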

- Based on the overlap between all pairs, could you group individuals in different categories with a different use of the Balgzand?

Like in question 5) the sample size of 8 is too small to group individuals in different categories. The UD of bird 416 has a low overlap with all the other birds.

```
# calculate overlap between all pairs
ko=kerneloverlaphr(kud.manual.h20,method="BA")
# determine which homeranges have little overlap, using a threshold of BA=0.3
ko<.3 # pairs with low UD overlap: bird 416 has the lowest overlap with other individuals
ko>.6 # pairs with higher UD overlap: birds 431 and 441 have the highest overlap
```

(Extra) For own practice - contact me if you have specific questions


In this study we had a maximum of two weeks to sample the food landscape, because over longer time scales the food landscape starts to change as a result of predation / depletion. If it takes approximately two minutes to collect a sample (once a boat has brought you to the right mudflat), how many samples do you think you can (roughly) collect per person in two weeks? How many people do you estimate were in the team that collected the sampling data in this study?

Spatial sampling during fieldwork can be very time consuming, especially in areas with difficult access, like intertidal mudflats:

- within a low tide period you have ca 5 hours available for sampling - excluding travel time between different mudflats by boat, ca 4 hours including travel time
- sampling is usually performed in a team of two people (one person taking notes, one sieving)
- 240 minutes of sampling time / 2 minutes per sample / 2 persons per team: 60 samples per person per day
- on windy days the mudflats cannot be accessed, therefore in 14 days you can collect around 500 samples per person.
- The dataset you are working with (ca 2500 samples) was collected in two weeks with 3 teams of two people and a skipper.
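The arithmetic behind these estimates can be written out explicitly. The number of workable days is an assumption, chosen here so that the totals match the figures above:

```
minutes.per.day    <- 4 * 60  # effective sampling time per low tide [min]
minutes.per.sample <- 2
persons.per.team   <- 2       # one person sieving, one taking notes
samples.per.person.per.day <-
  minutes.per.day / minutes.per.sample / persons.per.team  # 60
workable.days <- 8            # assumed: ~6 of the 14 days lost to wind
samples.per.person.per.day * workable.days        # ~500 samples per person
# three teams of two people:
3 * persons.per.team * samples.per.person.per.day * workable.days
# roughly the order of the ~2500 samples in the dataset
```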

- Most of the sampling was performed on a 50 meter grid. Given the shape of the variogram for cockle density, was this a good choice? Would you have preferred sampling on a grid with larger spacing (such that a larger area of the Balgzand and the homerange of the Oystercatchers could have been covered?)

Following the example code, visually inspect the variogram, and fit a variogram model to determine the variogram range `v.fit$range`. The variogram range is around 150 metres, where the variogram approaches its asymptote. To resolve spatial patterns you need to sample on a sufficiently fine spatial grid. A grid choice of 50 metres turned out fine in this study: the sampling grid was smaller than the variogram range, which is the distance beyond which most spatial correlation between points is lost. To resolve a spatial pattern, you need to be able to resolve spatial autocorrelation. If you can't, this indicates that the spatial pattern varies at a finer spatial scale than your sampling grid can resolve.

Doubling the sampling grid spacing to 100 metres would not have been a good idea: at that spacing you are already approaching the variogram asymptote.

- Explain why the sample variance approaches the asymptote of the variogram

This follows from the definition of the variogram. The asymptote of the variogram gives the variance between all point pairs in the sample that are separated by a large distance. Because of the large distance, the points are spatially unrelated to each other. The same holds for the sample variance, which does not consider spatial auto-correlation.
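A quick simulation with spatially uncorrelated data illustrates this: the empirical semivariance at any lag is then approximately the sample variance (pure base R, no gstat needed; the simulated values are illustrative only):

```
set.seed(42)
x <- runif(1000, 0, 100)             # random 1-D sampling locations
z <- rnorm(1000, mean = 10, sd = 2)  # values without spatial correlation
# empirical semivariance for one distant lag class (pairs 40-60 apart):
d <- abs(outer(x, x, "-"))           # pairwise distances
g <- outer(z, z, "-")^2              # pairwise squared differences
in.lag <- d > 40 & d < 60 & upper.tri(d)
semivariance <- mean(g[in.lag]) / 2
semivariance   # semivariance at large lag is close to...
var(z)         # ...the sample variance (~4)
```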

- Also make a kriging interpolation for the size of the cockles, i.e. `CockleLengthMean`. Compare the variograms for Cockle density and Cockle size. What are the differences, and can you hint at a cause for these differences, given the recruitment cycle of the Cockle?

For making a spatial interpolation of Cockle size, make sure to exclude all points where no cockles were found (these sampling locations have an NA (Not Available) value for cockle length):

```
rows.with.cockles=which(!is.na(DataSampling@data$CockleLengthMean))
DataSamplingSubset=DataSampling[rows.with.cockles,]
```

You can repeat the interpolation procedure with the `DataSamplingSubset` dataset. You will find that the variogram range for cockle size is much larger than that for cockle density. This is because different cockle beds found across the mudflats are often of the same age, because they originated from the same spatfall (Dutch: broedval). Large spatfall only occurs once every few years, producing cohorts of new Cockles that all have the same age (and size) over large areas. Because mudflats are intersected by streams and gullies, spatially separated cockle beds form. Spatial correlation in size is therefore stronger than spatial correlation in density.

(Extra) For own practice - contact me if you have specific questions

Plot the equation and verify it has the shape of a Type II functional response. Why does the functional response reach a plateau? What variable determines the level of the plateau?

```
f <- function(x,a,h) (a*x)/(1+a*h*x)
# the handling time determines the asymptote:
curve(f(x,1,1),from=0,to=10)
curve(f(x,1,2),from=0,to=10,add=T)
# the attack rate determines how steeply the functional response increases with prey density:
curve(f(x,1,1),from=0,to=10)
curve(f(x,2,1),from=0,to=10,add=T)
```

The functional response asymptote is given by 1/h, with h the handling time of a prey item. To find the asymptote, let's send the prey density n to infinity: \[f=\frac{an}{1+ahn} = \frac{a}{\frac{1}{n}+ah}\] if n is very large, then 1/n approaches zero, therefore: \[f=\frac{a}{ah}=\frac{1}{h}\]
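A quick numerical check of this limit, reusing the disk equation defined above:

```
f <- function(n, a, h) (a * n) / (1 + a * h * n)
# for handling time h = 2, the asymptote is 1/h = 0.5,
# regardless of the attack rate a:
f(1e8, a = 1, h = 2)
f(1e8, a = 5, h = 2)
```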

- Holling’s disk equation assumes that attack rates and handling times are independent of prey density. Are these reasonable assumptions, under which circumstances?

This assumption is reasonable as long as:

- the predator attacks the prey once it is within a certain distance, irrespective of the prey density. This may not be the case when a predator has difficulty detecting a prey item that occurs at low density, i.e. the attack rate may depend on prey density. Such predator behaviour may lead to a type III functional response.
- the prey does not behave differently at different prey densities. For example, this may happen when there is competition for hiding space when a predator approaches: at high prey density hiding places may become scarce.
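The type III response mentioned above can be sketched by letting the attack rate itself increase with prey density, here as a(x) = b*x (a common textbook form; the parameter values are illustrative only):

```
# type II: constant attack rate a
f2 <- function(x, a, h) (a * x) / (1 + a * h * x)
# type III: attack rate increasing with prey density, a(x) = b*x
f3 <- function(x, b, h) (b * x^2) / (1 + b * h * x^2)
curve(f2(x, 1, 1), from = 0, to = 10, ylab = "capture rate")
curve(f3(x, 1, 1), from = 0, to = 10, add = TRUE, col = "red")
# the type III curve is sigmoid: accelerating at low prey density,
# but saturating at the same asymptote 1/h
```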

- plot the capture rate that Oystercatchers can realize for several realistic size and density ranges

Plotting cockles from 15 to 35 mm:

```
fCockle <- function(density,length) 0.000860*density/(1 + 0.0001897*density*length^1.792)
curve(fCockle(x,15),from=0, to=300, xlab="Cockle density in cockles/m^2", ylab="capture/s")
curve(fCockle(x,25),from=0, to=300, col='red',add=T)
curve(fCockle(x,35),from=0, to=300, col='blue',add=T)
```

- plot the intake rate in AFDM/s that Oystercatchers can realize for several realistic size and density ranges. Is the longer handling time for large Cockles compensated by a higher flesh content, or not?

```
IFIR <- function(density,length) fCockle(density,length)*Length2AFDM(length)
curve(IFIR(x,35),from=0, to=300, col='blue',xlab="Cockle density in cockles/m^2", ylab="Intake rate in AFDM/s")
curve(IFIR(x,25),from=0, to=300, col='red',add=T)
curve(IFIR(x,15),from=0, to=300, add=T)
```

The intake rate for the larger cockles is higher, even though their handling time is longer. This is because the flesh content of a large cockle is so much higher (handling time increases roughly quadratically with cockle size, but flesh content increases with the cube of the size).
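This scaling argument can be illustrated with hypothetical allometric functions. The exponents are the ones named in the text; the coefficients are made up for illustration and are not the course's fitted `Length2AFDM`:

```
# assumed allometries (hypothetical coefficients, illustration only):
handling <- function(length) 0.01 * length^2    # handling time ~ length^2
flesh    <- function(length) 0.001 * length^3   # flesh content ~ length^3
lengths <- c(15, 25, 35)
# flesh gained per unit handling time grows linearly with length,
# so larger cockles remain the more profitable prey:
flesh(lengths) / handling(lengths)
```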

- Use your results from part II to calculate the landscape of interference-free intake rates

```
# use the kriging estimates of cockle density and cockle length at each grid location
# to calculate the interference free intake rate
IFIR.prediction=IFIR(patch4.pred$CockleNDensity.pred,patch4.pred$CockleLengthMean.pred)
# plot the intake rates:
hist(IFIR.prediction)
# add the predictions to the patch4 spatial object
patch4.pred@data$IFIR.pred=IFIR.prediction
```

- When there is no competition or costs of interference (i.e. k approaches zero), how do you expect that identical, intake-rate maximizing birds will distribute?

All birds will concentrate in the single patch with the highest intake rate.

- Assume we have a system with 2 patches, both of size 1. Further assume the exponential interference constant k=.01, and the interference free intake rate in the patches are 3 and 2. If 100 birds are released in the system, how will they distribute? Note: since all birds are identical, they will distribute such that they all have the same, maximized intake rate. Hint: express the bird density in terms of the interference-free intake rate. To find the maximized intake rate, invoke the condition that the bird densities multiplied by the patch areas should add up to the total number of birds present (100).

Let’s solve the ideal-free distribution of intake-rate maximizers. We have only two patches, but below you find the procedure to solve it for any number and sizes of patches:

First we write down the realised intake rate, in terms of interference-free intake rate (IFIR) and interference \(I(n)\). The realised intake rate in all patches should be the same, let’s call this intake rate \(c\). We then have \[ \begin{eqnarray} IFIR \times I(n) = IFIR \times e^{-k n} = c \\ \end{eqnarray} \] We can then express the bird density \(n\) in terms of \(c\) and \(IFIR\): \[ \begin{eqnarray} e^{-k n} &=& \frac{c}{IFIR} \\ -k n &=& \log\left(\frac{c}{IFIR}\right) \\ n &=& -\frac{1}{k}\log\left(\frac{c}{IFIR}\right) \\ \end{eqnarray} \] Because the interference constant \(k\) and the interference free intake rates (IFIR) are known for each patch, we only need to find the maximized intake rate \(c\). To solve \(c\), we use the fact that the total number of birds on all patches should add up to the total number of birds present in the system (100 birds).

The number of birds \(N\) in a patch equals the bird density times the area \(A\) of the patch: \[ N= A n \] In our case we have two patches. The number of birds in the patches should add up to the total of 100 birds. \[ A_1 n_1 + A_2 n_2 = 100 \] Filling out the earlier expression for \(n\): \[ -\frac{A_1}{k}\log\left(\frac{c}{IFIR_1}\right) -\frac{A_2}{k}\log\left(\frac{c}{IFIR_2}\right) =100\\ \] In this equation the only unknown is \(c\). Filling out all the parameters: \[ -\frac{1}{.01}\log\left(\frac{c}{3}\right) -\frac{1}{.01}\log\left(\frac{c}{2}\right) =100\\ \log\left(\frac{c}{3}\right) +\log\left(\frac{c}{2}\right) =-1\\ \log(c) - \log(3) + \log(c)-\log(2)=-1\\ \log(c) = (\log(3)+\log(2) -1)/2 = 0.395 \\ c=1.485 \] Now that we have found c, we also have found the bird densities in the two patches: \[ n_1 = -\frac{1}{.01}\log\left(\frac{1.485}{3}\right)=70.\\ n_2 = -\frac{1}{.01}\log\left(\frac{1.485}{2}\right)=30. \]
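A quick numerical check of this solution, using the values derived above:

```
k <- 0.01
IFIR.patch <- c(3, 2)  # interference-free intake rates in the two patches
A <- c(1, 1)           # patch areas
# maximized intake rate c from the derivation above:
c.star <- exp((log(3) + log(2) - 1) / 2)
c.star                 # ~1.485
# bird densities in the two patches:
n <- -(1 / k) * log(c.star / IFIR.patch)
n                      # ~70 and ~30 birds/unit area
sum(A * n)             # adds up to the 100 birds released
```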

- Predict the distribution of the 400 Oystercatchers in patch 4 for these parameters. What are the most striking differences when compared with the GPS data?

```
# the exponential interference constant k=5 m^2/bird
IFD.pred=IFD(CockleIFIR,Areas,400,5)
# add the IFD prediction to the patch4.pred object:
# the bird densities are stored by the function IFD in the field animaldensity
patch4.pred@data$IFD.pred=IFD.pred$animaldensity
# plot the IFD
breaks=seq(0,max(IFD.pred$animaldensity),max(IFD.pred$animaldensity)/100)
image(patch4.pred["IFD.pred"],xlim=bbox["x",],ylim=bbox["y",],col=terrain.colors(length(breaks)-1),breaks=breaks)
# overlay the GPS points
points(GPSData@coords,col='red',pch=".")
```

From the plot you see that the ideal-free distributed birds concentrate only in the very best patches. Most of the other areas are left empty. This is a common deficiency of simple IFD models that predict animal distributions on a standing stock of prey: intake-rate maximizing birds concentrate too densely in the best patches only. The observations show that birds distribute much more evenly.

- Adjust the interference constant such that it best matches the GPS data (rough estimate required only). If your estimate and the model were to be true, at which distance would Oystercatchers start to experience density dependent competition effects? Is it reasonable to assume this density dependence is the result of direct interactions, like kleptoparasitism?

The exponential interference constant determined in the lab was 5 m^{2}/bird, which tells you that the realised intake rate goes down by a factor 1/e, when the bird density is 1/5 bird/m^{2}, or 1 bird/5 m^{2}.

By increasing the interference constant - to say 1000 m^{2}/bird - you get a much broader distribution of birds that better matches the observations (however, you will never predict any birds on patches without food, although in reality of course this happens quite often, e.g. when birds are not foraging). An interference constant of 1000 m^{2}/bird tells you that the realised intake rate will go down by a factor 1/e, when the bird density is 1/1000 bird/m^{2}, or 1 bird/1000 m^{2}. This is a really low bird density, where the distance between the birds is more than 30 metres. Kleptoparasitism involves stealing prey items from each other. It is unlikely that a competitor can steal a prey item from a bird that is more than 30 metres away, therefore an interference effect that can be described with an exponential interference factor of k=1000 is unlikely to describe a mechanism of kleptoparasitism.
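The 30 metre figure follows directly from the density: at 1 bird per 1000 m², birds on a regular grid are spaced at roughly the square root of the area per bird:

```
area.per.bird <- 1000  # m^2/bird, i.e. a density of 0.001 bird/m^2
sqrt(area.per.bird)    # ~31.6 m between birds on a regular grid
# for comparison, at the lab estimate of 5 m^2/bird:
sqrt(5)                # ~2.2 m between birds
```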

- The assumption of the ideal-free distribution is that birds are both ‘ideal’ and ‘free’. Are these good assumptions for an Oystercatcher? How do you expect the Oystercatcher distribution will change if birds do not behave ideal? And if they do not behave free?

Both the ideal and free assumptions are approximations of reality. For birds to be ideal they need to know the location and density of all prey items - it is unlikely that Oystercatchers are capable of doing so for an area as large as the Balgzand. Oystercatchers are continuously searching for prey items, which makes them also visit the poorer patches. Therefore non-ideal behaviour leads to spatial distributions that are more spread out than predicted by the ideal-free model of intake rate maximising birds.

Oystercatchers are also not completely free, since flight costs are involved in moving between the roosts and the feeding areas on the mudflats. Mudflats very far away may therefore not be considered by Oystercatchers, because it takes too much energy to fly there. Non-free behaviour may thus lead to higher bird densities at feeding areas closer to the high tide roosts than predicted by the ideal-free model of intake rate maximising birds.