pairing notebooks

This commit is contained in:
Jonathan Taylor
2023-08-20 19:41:01 -07:00
parent c82e9d5067
commit 058e89ef1c
22 changed files with 489 additions and 346 deletions

View File

@@ -1,11 +1,24 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 2
# Lab: Introduction to Python
## Getting Started
@@ -61,21 +74,21 @@ inputs. For example, the
print('fit a model with', 11, 'variables')
```
The following command will provide information about the `print()` function.
```{python}
print?
# print?
```
Adding two integers in `Python` is pretty intuitive.
```{python}
3 + 5
```
In `Python`, textual data is handled using
*strings*. For instance, `"hello"` and
`'hello'`
@@ -86,7 +99,7 @@ We can concatenate them using the addition `+` symbol.
"hello" + " " + "world"
```
A string is actually a type of *sequence*: this is a generic term for an ordered list.
The three most important types of sequences are lists, tuples, and strings.
We introduce lists now.
@@ -102,7 +115,7 @@ x = [3, 4, 5]
x
```
Note that we used the brackets
`[]` to construct this list.
@@ -114,14 +127,14 @@ y = [4, 9, 7]
x + y
```
The result may appear slightly counterintuitive: why did `Python` not add the entries of the lists
element-by-element?
In `Python`, lists hold *arbitrary* objects, and are added using *concatenation*.
In fact, concatenation is the behavior that we saw earlier when we entered `"hello" + " " + "world"`.
This example reflects the fact that
`Python` is a general-purpose programming language. Much of `Python`'s data-specific
functionality comes from other packages, notably `numpy`
@@ -136,8 +149,8 @@ See [docs.scipy.org/doc/numpy/user/quickstart.html](https://docs.scipy.org/doc/n
As mentioned earlier, this book makes use of functionality that is contained in the `numpy`
*library*, or *package*. A package is a collection of modules that are not necessarily included in
the base `Python` distribution. The name `numpy` is an abbreviation for *numerical Python*.
To access `numpy`, we must first `import` it.
```{python}
@@ -181,7 +194,7 @@ x
The object `x` has several
*attributes*, or associated objects. To access an attribute of `x`, we type `x.attribute`, where we replace `attribute`
@@ -191,7 +204,7 @@ For instance, we can access the `ndim` attribute of `x` as follows.
```{python}
x.ndim
```
The output indicates that `x` is a two-dimensional array.
Similarly, `x.dtype` is the *data type* attribute of the object `x`. This indicates that `x` is
comprised of 64-bit integers:
@@ -215,7 +228,7 @@ documentation associated with the function `fun`, if it exists.
We can try this for `np.array()`.
```{python}
np.array?
# np.array?
```
This documentation indicates that we could create a floating point array by passing a `dtype` argument into `np.array()`.
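For example (a quick sketch, not one of the lab's own cells, assuming `numpy` has already been imported as `np` as above), passing `dtype=float` produces a floating point array:
```{python}
# the same entries as before, now stored as floating point numbers
np.array([3, 4, 5], dtype=float)
```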
@@ -233,7 +246,7 @@ at its `shape` attribute.
x.shape
```
A *method* is a function that is associated with an
object.
@@ -270,10 +283,10 @@ x_reshape = x.reshape((2, 3))
print('reshaped x:\n', x_reshape)
```
The previous output reveals that `numpy` arrays are specified as a sequence
of *rows*. This is called *row-major ordering*, as opposed to *column-major ordering*.
`Python` (and hence `numpy`) uses 0-based
indexing. This means that to access the top left element of `x_reshape`,
@@ -303,13 +316,13 @@ print('x_reshape after we modify its top left element:\n', x_reshape)
print('x after we modify top left element of x_reshape:\n', x)
```
Modifying `x_reshape` also modified `x` because the two objects occupy the same space in memory.
We just saw that we can modify an element of an array. Can we also modify a tuple? It turns out that we cannot --- and trying to do so introduces
an *exception*, or error.
@@ -318,8 +331,8 @@ my_tuple = (3, 4, 5)
my_tuple[0] = 2
```
We now briefly mention some attributes of arrays that will come in handy. An array's `shape` attribute contains its dimension; this is always a tuple.
The `ndim` attribute yields the number of dimensions, and `T` provides its transpose.
@@ -327,7 +340,7 @@ The `ndim` attribute yields the number of dimensions, and `T` provides its tran
x_reshape.shape, x_reshape.ndim, x_reshape.T
```
Notice that the three individual outputs `(2,3)`, `2`, and `array([[5, 4],[2, 5], [3,6]])` are themselves output as a tuple.
We will often want to apply functions to arrays.
@@ -338,22 +351,22 @@ square root of the entries using the `np.sqrt()` function:
np.sqrt(x)
```
We can also square the elements:
```{python}
x**2
```
We can compute the square roots using the same notation, raising to the power of $1/2$ instead of 2.
```{python}
x**0.5
```
Throughout this book, we will often want to generate random data.
The `np.random.normal()` function generates a vector of random
normal variables. We can learn more about this function by looking at the help page, via a call to `np.random.normal?`.
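As a quick illustration (a sketch, not one of the lab's cells), the `loc`, `scale` and `size` arguments give the mean, standard deviation and number of draws:
```{python}
# five draws from N(0, 1); loc is the mean and scale the standard deviation
np.random.normal(loc=0, scale=1, size=5)
```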
@@ -370,7 +383,7 @@ x = np.random.normal(size=50)
x
```
We create an array `y` by adding an independent $N(50,1)$ random variable to each element of `x`.
```{python}
@@ -382,7 +395,7 @@ correlation between `x` and `y`.
```{python}
np.corrcoef(x, y)
```
If you're following along in your own `Jupyter` notebook, then you probably noticed that you got a different set of results when you ran the past few
commands. In particular,
each
@@ -395,7 +408,7 @@ print(np.random.normal(scale=5, size=2))
```
In order to ensure that our code provides exactly the same results
each time it is run, we can set a *random seed*
using the
@@ -411,7 +424,7 @@ print(rng.normal(scale=5, size=2))
rng2 = np.random.default_rng(1303)
print(rng2.normal(scale=5, size=2))
```
Throughout the labs in this book, we use `np.random.default_rng()` whenever we
perform calculations involving random quantities within `numpy`. In principle, this
should enable the reader to exactly reproduce the stated results. However, as new versions of `numpy` become available, it is possible
@@ -434,7 +447,7 @@ np.mean(y), y.mean()
```{python}
np.var(y), y.var(), np.mean((y - y.mean())**2)
```
Notice that by default `np.var()` divides by the sample size $n$ rather
than $n-1$; see the `ddof` argument in `np.var?`.
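For instance (a sketch, not one of the lab's cells), setting `ddof=1` divides by $n-1$ instead:
```{python}
# the second value is the usual unbiased variance estimate
np.var(y), np.var(y, ddof=1)
```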
@@ -443,7 +456,7 @@ than $n-1$; see the `ddof` argument in `np.var?`.
```{python}
np.sqrt(np.var(y)), np.std(y)
```
The `np.mean()`, `np.var()`, and `np.std()` functions can also be applied to the rows and columns of a matrix.
To see this, we construct a $10 \times 3$ matrix of $N(0,1)$ random variables, and consider computing its row sums.
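A sketch of that construction (the lab's own cell is elided in this diff; the seed is illustrative):
```{python}
rng = np.random.default_rng(3)
X = rng.standard_normal((10, 3))  # a 10 x 3 matrix of N(0,1) draws
X.sum(axis=1)                     # one sum per row
```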
@@ -457,14 +470,14 @@ Since arrays are row-major ordered, the first axis, i.e. `axis=0`, refers to its
```{python}
X.mean(axis=0)
```
The following yields the same result.
```{python}
X.mean(0)
```
## Graphics
In `Python`, common practice is to use the library
@@ -530,7 +543,7 @@ As an alternative, we could use the `ax.scatter()` function to create a scatter
fig, ax = subplots(figsize=(8, 8))
ax.scatter(x, y, marker='o');
```
Notice that in the code blocks above, we have ended
the last line with a semicolon. This prevents `ax.plot(x, y)` from printing
text to the notebook. However, it does not prevent a plot from being produced.
@@ -571,7 +584,7 @@ fig.set_size_inches(12,3)
fig
```
Occasionally we will want to create several plots within a figure. This can be
achieved by passing additional arguments to `subplots()`.
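For example (a sketch, not one of the lab's cells), `nrows` and `ncols` produce a grid of axes:
```{python}
# a 2 x 3 grid of axes within a single figure
fig, axes = subplots(nrows=2, ncols=3, figsize=(15, 5))
```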
@@ -600,8 +613,8 @@ Type `subplots?` to learn more about
To save the output of `fig`, we call its `savefig()`
method. The argument `dpi` is the dots per inch, used
to determine how large the figure will be in pixels.
@@ -611,7 +624,7 @@ fig.savefig("Figure.png", dpi=400)
fig.savefig("Figure.pdf", dpi=200);
```
We can continue to modify `fig` using step-by-step updates; for example, we can modify the range of the $x$-axis, re-save the figure, and even re-display it.
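A sketch of such an update (the specific limits and filename are illustrative, not the lab's own values):
```{python}
ax.set_xlim([0.5, 250])                       # change the x-axis range
fig.savefig("Figure_updated.png", dpi=400)    # re-save the modified figure
fig                                           # re-display it
```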
@@ -663,7 +676,7 @@ fig, ax = subplots(figsize=(8, 8))
ax.imshow(f);
```
## Sequences and Slice Notation
@@ -677,8 +690,8 @@ seq1 = np.linspace(0, 10, 11)
seq1
```
The function `np.arange()`
returns a sequence of numbers spaced out by `step`. If `step` is not specified, then a default value of $1$ is used. Let's create a sequence
that starts at $0$ and ends at $10$.
@@ -688,7 +701,7 @@ seq2 = np.arange(0, 10)
seq2
```
Why isn't $10$ output above? This has to do with *slice* notation in `Python`.
Slice notation
is used to index sequences such as lists, tuples and arrays.
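For example (a brief sketch, not one of the lab's cells), the slice `3:6` picks out indices 3, 4 and 5, excluding the endpoint:
```{python}
seq2[3:6]
```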
@@ -730,7 +743,7 @@ See the documentation `slice?` for useful options in creating slices.
## Indexing Data
To begin, we create a two-dimensional `numpy` array.
@@ -740,7 +753,7 @@ A = np.array(np.arange(16)).reshape((4, 4))
A
```
Typing `A[1,2]` retrieves the element corresponding to the second row and third
column. (As usual, `Python` indexes from $0.$)
@@ -748,7 +761,7 @@ column. (As usual, `Python` indexes from $0.$)
A[1,2]
```
The first number after the open-bracket symbol `[`
refers to the row, and the second number refers to the column.
@@ -760,7 +773,7 @@ The first number after the open-bracket symbol `[`
A[[1,3]]
```
To select the first and third columns, we pass in `[0,2]` as the second argument in the square brackets.
In this case we need to supply the first argument `:`
which selects all rows.
@@ -769,7 +782,7 @@ which selects all rows.
A[:,[0,2]]
```
Now, suppose that we want to select the submatrix made up of the second and fourth
rows as well as the first and third columns. This is where
indexing gets slightly tricky. It is natural to try to use lists to retrieve the rows and columns:
@@ -778,21 +791,21 @@ indexing gets slightly tricky. It is natural to try to use lists to retrieve th
A[[1,3],[0,2]]
```
Oops --- what happened? We got a one-dimensional array of length two identical to
```{python}
np.array([A[1,0],A[3,2]])
```
Similarly, the following code fails to extract the submatrix comprised of the second and fourth rows and the first, third, and fourth columns:
```{python}
A[[1,3],[0,2,3]]
```
We can see what has gone wrong here. When supplied with two indexing lists, the `numpy` interpretation is that these provide pairs of $i,j$ indices for a series of entries. That is why the pair of lists must have the same length. However, that was not our intent, since we are looking for a submatrix.
One easy way to do this is as follows. We first create a submatrix by subsetting the rows of `A`, and then on the fly we make a further submatrix by subsetting its columns.
@@ -803,7 +816,7 @@ A[[1,3]][:,[0,2]]
```
There are more efficient ways of achieving the same result.
@@ -815,7 +828,7 @@ idx = np.ix_([1,3],[0,2,3])
A[idx]
```
Alternatively, we can subset matrices efficiently using slices.
@@ -829,7 +842,7 @@ A[1:4:2,0:3:2]
```
Why are we able to retrieve a submatrix directly using slices but not using lists?
It's because they are different `Python` types, and
are treated differently by `numpy`.
@@ -845,7 +858,7 @@ Slices can be used to extract objects from arbitrary sequences, such as strings,
### Boolean Indexing
In `numpy`, a *Boolean* is a type that equals either `True` or `False` (also represented as $1$ and $0$, respectively).
@@ -862,7 +875,7 @@ keep_rows[[1,3]] = True
keep_rows
```
Note that the elements of `keep_rows`, when viewed as integers, are the same as the
values of `np.array([0,1,0,1])`. Below, we use `==` to verify their equality. When
applied to two arrays, the `==` operation is applied elementwise.
@@ -871,7 +884,7 @@ applied to two arrays, the `==` operation is applied elementwise.
np.all(keep_rows == np.array([0,1,0,1]))
```
(Here, the function `np.all()` has checked whether
all entries of an array are `True`. A similar function, `np.any()`, can be used to check whether any entries of an array are `True`.)
@@ -883,14 +896,14 @@ The former retrieves the first, second, first, and second rows of `A`.
A[np.array([0,1,0,1])]
```
By contrast, `keep_rows` retrieves only the second and fourth rows of `A` --- i.e. the rows for which the Boolean equals `True`.
```{python}
A[keep_rows]
```
This example shows that Booleans and integers are treated differently by `numpy`.
@@ -914,7 +927,7 @@ A[idx_mixed]
```
For more details on indexing in `numpy`, readers are referred
to the `numpy` tutorial mentioned earlier.
@@ -967,7 +980,7 @@ files. Before loading data into `Python`, it is a good idea to view it using
a text editor or other software, such as Microsoft Excel.
We now take a look at the column of `Auto` corresponding to the variable `horsepower`:
@@ -988,7 +1001,7 @@ We see the culprit is the value `?`, which is being used to encode missing value
To fix the problem, we must provide `pd.read_csv()` with an argument called `na_values`.
Now, each instance of `?` in the file is replaced with the
value `np.nan`, which means *not a number*:
@@ -1000,8 +1013,8 @@ Auto = pd.read_csv('Auto.data',
Auto['horsepower'].sum()
```
The `Auto.shape` attribute tells us that the data has 397
observations, or rows, and nine variables, or columns.
@@ -1009,7 +1022,7 @@ observations, or rows, and nine variables, or columns.
Auto.shape
```
There are
various ways to deal with missing data.
In this case, since only five of the rows contain missing
@@ -1020,7 +1033,7 @@ Auto_new = Auto.dropna()
Auto_new.shape
```
### Basics of Selecting Rows and Columns
@@ -1031,7 +1044,7 @@ Auto = Auto_new # overwrite the previous value
Auto.columns
```
Accessing the rows and columns of a data frame is similar, but not identical, to accessing the rows and columns of an array.
Recall that the first argument to the `[]` method
@@ -1316,8 +1329,8 @@ Auto.plot.scatter('horsepower', 'mpg', ax=axes[1]);
```
Note also that the columns of a data frame can be accessed as attributes: try typing in `Auto.horsepower`.
We now consider the `cylinders` variable. Typing in `Auto.cylinders.dtype` reveals that it is being treated as a quantitative variable.
However, since there is only a small number of possible values for this variable, we may wish to treat it as
qualitative. Below, we replace
@@ -1336,7 +1349,7 @@ fig, ax = subplots(figsize=(8, 8))
Auto.boxplot('mpg', by='cylinders', ax=ax);
```
The `hist()` method can be used to plot a *histogram*.
```{python}

View File

@@ -8278,8 +8278,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,3 +1,16 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 3
@@ -14,7 +27,7 @@ import pandas as pd
from matplotlib.pyplot import subplots
```
### New imports
Throughout this lab we will introduce new functions and libraries. However,
@@ -90,7 +103,7 @@ A.sum()
```
## Simple Linear Regression
In this section we will construct model
@@ -112,7 +125,7 @@ Boston = load_data("Boston")
Boston.columns
```
Type `Boston?` to find out more about these data.
We start by using the `sm.OLS()` function to fit a
@@ -127,7 +140,7 @@ X = pd.DataFrame({'intercept': np.ones(Boston.shape[0]),
X[:4]
```
We extract the response, and fit the model.
```{python}
@@ -149,7 +162,7 @@ method, and returns such a summary.
summarize(results)
```
Before we describe other methods for working with fitted models, we outline a more useful and general framework for constructing a model matrix `X`.
### Using Transformations: Fit and Transform
@@ -220,8 +233,8 @@ The fitted coefficients can also be retrieved as the
results.params
```
The `get_prediction()` method can be used to obtain predictions, and produce confidence intervals and
prediction intervals for the prediction of `medv` for given values of `lstat`.
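A sketch of that usage (assuming the fitted `results` object and the design matrix `X` from above; prediction intervals are requested via the `obs` flag of `conf_int()`):
```{python}
new_predictions = results.get_prediction(X)
new_predictions.conf_int(alpha=0.05)             # confidence intervals
new_predictions.conf_int(obs=True, alpha=0.05)   # prediction intervals
```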
@@ -391,7 +404,7 @@ terms = Boston.columns.drop('medv')
terms
```
We can now fit the model with all the variables in `terms` using
the same model matrix builder.
@@ -402,7 +415,7 @@ results = model.fit()
summarize(results)
```
What if we would like to perform a regression using all of the variables but one? For
example, in the above regression output, `age` has a high $p$-value.
So we may wish to run a regression excluding this predictor.
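One way to do this (a sketch; `MS` is the `ModelSpec` builder used earlier, and the variable names here are illustrative) is to drop `age` before rebuilding the design:
```{python}
minus_age = Boston.columns.drop(['medv', 'age'])
Xma = MS(minus_age).fit_transform(Boston)
summarize(sm.OLS(y, Xma).fit())
```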
@@ -477,7 +490,7 @@ model2 = sm.OLS(y, X)
summarize(model2.fit())
```
## Non-linear Transformations of the Predictors
The model matrix builder can include terms beyond
@@ -552,7 +565,7 @@ there is little discernible pattern in the residuals.
In order to create a cubic or higher-degree polynomial fit, we can simply change the degree argument
to `poly()`.
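For instance (a sketch, assuming `poly()` from `ISLP.models` has been imported as in the lab), a cubic fit in `lstat`:
```{python}
X3 = MS([poly('lstat', degree=3)]).fit_transform(Boston)
summarize(sm.OLS(y, X3).fit())
```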
## Qualitative Predictors
Here we use the `Carseats` data, which is included in the

View File

@@ -2967,8 +2967,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,3 +1,16 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 4

View File

@@ -5031,8 +5031,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,3 +1,16 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 5

View File

@@ -1279,8 +1279,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,3 +1,16 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 6
@@ -32,7 +45,7 @@ from ISLP.models import \
(Stepwise,
sklearn_selected,
sklearn_selection_path)
!pip install l0bnb
# !pip install l0bnb
from l0bnb import fit_path
```
@@ -61,7 +74,7 @@ Hitters = load_data('Hitters')
np.isnan(Hitters['Salary']).sum()
```
We see that `Salary` is missing for 59 players. The
`dropna()` method of data frames removes all of the rows that have missing
values in any variable (by default --- see `Hitters.dropna?`).
@@ -71,8 +84,8 @@ Hitters = Hitters.dropna();
Hitters.shape
```
We first choose the best model using forward selection based on $C_p$ (6.2). This score
is not built in as a metric to `sklearn`. We therefore define a function to compute it ourselves, and use
it as a scorer. By default, `sklearn` tries to maximize a score, hence
@@ -106,7 +119,7 @@ neg_Cp = partial(nCp, sigma2)
```
We can now use `neg_Cp()` as a scorer for model selection.
Along with a score we need to specify the search strategy. This is done through the object
`Stepwise()` in the `ISLP.models` package. The method `Stepwise.first_peak()`
@@ -120,7 +133,7 @@ strategy = Stepwise.first_peak(design,
max_terms=len(design.terms))
```
We now fit a linear regression model with `Salary` as outcome using forward
selection. To do so, we use the function `sklearn_selected()` from the `ISLP.models` package. This takes
a model from `statsmodels` along with a search strategy and selects a model with its
@@ -134,7 +147,7 @@ hitters_MSE.fit(Hitters, Y)
hitters_MSE.selected_state_
```
Using `neg_Cp` results in a smaller model, as expected, with just 10 variables selected.
```{python}
@@ -145,7 +158,7 @@ hitters_Cp.fit(Hitters, Y)
hitters_Cp.selected_state_
```
### Choosing Among Models Using the Validation Set Approach and Cross-Validation
As an alternative to using $C_p$, we might try cross-validation to select a model in forward selection. For this, we need a
@@ -167,7 +180,7 @@ strategy = Stepwise.fixed_steps(design,
full_path = sklearn_selection_path(OLS, strategy)
```
We now fit the full forward-selection path on the `Hitters` data and compute the fitted values.
```{python}
@@ -176,8 +189,8 @@ Yhat_in = full_path.predict(Hitters)
Yhat_in.shape
```
This gives us an array of fitted values --- 20 steps in all, including the fitted mean for the null model --- which we can use to evaluate
in-sample MSE. As expected, the in-sample MSE improves each step we take,
indicating we must use either the validation or cross-validation
@@ -266,7 +279,7 @@ ax.legend()
mse_fig
```
To repeat the above using the validation set approach, we simply change our
`cv` argument to a validation set: one random split of the data into a test set and a training set. We choose a test size
of 20%, similar to the size of each test set in 5-fold cross-validation. `skm.ShuffleSplit()`
@@ -296,7 +309,7 @@ ax.legend()
mse_fig
```
### Best Subset Selection
Forward stepwise is a *greedy* selection procedure; at each step it augments the current set by including one additional variable. We now apply best subset selection to the `Hitters`
@@ -324,7 +337,7 @@ path = fit_path(X,
max_nonzeros=X.shape[1])
```
The function `fit_path()` returns a list whose values include the fitted coefficients as `B`, an intercept as `B0`, as well as a few other attributes related to the particular path algorithm used. Such details are beyond the scope of this book.
```{python}
@@ -392,7 +405,7 @@ soln_path.index.name = 'negative log(lambda)'
soln_path
```
We plot the paths to get a sense of how the coefficients vary with $\lambda$.
To control the location of the legend we first set `legend` to `False` in the
plot method, adding it afterward with the `legend()` method of `ax`.
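A sketch of that plotting pattern (the lab's own cell is elided in this diff; `subplots` is assumed imported from `matplotlib.pyplot` as in the other labs, and the labels are illustrative):
```{python}
path_fig, ax = subplots(figsize=(8, 8))
soln_path.plot(ax=ax, legend=False)            # suppress the automatic legend
ax.set_xlabel('negative log(lambda)', fontsize=20)
ax.set_ylabel('Standardized coefficients', fontsize=20)
ax.legend(loc='upper left');                   # add it back where we want it
```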
@@ -416,14 +429,14 @@ beta_hat = soln_path.loc[soln_path.index[39]]
lambdas[39], beta_hat
```
Let's compute the $\ell_2$ norm of the standardized coefficients.
```{python}
np.linalg.norm(beta_hat)
```
In contrast, here is the $\ell_2$ norm when $\lambda$ is 2.44e-01.
Note the much larger $\ell_2$ norm of the
coefficients associated with this smaller value of $\lambda$.
@@ -477,7 +490,7 @@ results = skm.cross_validate(ridge,
-results['test_score']
```
The test MSE is 1.342e+05. Note
that if we had instead simply fit a model with just an intercept, we
would have predicted each test observation using the mean of the
@@ -514,7 +527,7 @@ grid.best_params_['ridge__alpha']
grid.best_estimator_
```
Alternatively, we can use 5-fold cross-validation.
```{python}
@@ -540,7 +553,7 @@ ax.set_xlabel('$-\log(\lambda)$', fontsize=20)
ax.set_ylabel('Cross-validated MSE', fontsize=20);
```
One can cross-validate different metrics to choose a parameter. The default
metric for `skl.ElasticNet()` is test $R^2$.
Let's compare $R^2$ to MSE for cross-validation here.
@@ -552,7 +565,7 @@ grid_r2 = skm.GridSearchCV(pipe,
grid_r2.fit(X, Y)
```
Finally, let's plot the results for cross-validated $R^2$.
```{python}
@@ -564,7 +577,7 @@ ax.set_xlabel('$-\log(\lambda)$', fontsize=20)
ax.set_ylabel('Cross-validated $R^2$', fontsize=20);
```
### Fast Cross-Validation for Solution Paths
The ridge, lasso, and elastic net can be efficiently fit along a sequence of $\lambda$ values, creating what is known as a *solution path* or *regularization path*. Hence there is specialized code to fit
@@ -584,7 +597,7 @@ pipeCV = Pipeline(steps=[('scaler', scaler),
pipeCV.fit(X, Y)
```
Let's again produce a plot of the cross-validation error to see that
it is similar to using `skm.GridSearchCV`.
@@ -600,7 +613,7 @@ ax.set_xlabel('$-\log(\lambda)$', fontsize=20)
ax.set_ylabel('Cross-validated MSE', fontsize=20);
```
We see that the value of $\lambda$ that results in the
smallest cross-validation error is 1.19e-02, available
as the value `tuned_ridge.alpha_`. What is the test MSE
@@ -610,7 +623,7 @@ associated with this value of $\lambda$?
np.min(tuned_ridge.mse_path_.mean(1))
```
This represents a further improvement over the test MSE that we got
using $\lambda=4$. Finally, `tuned_ridge.coef_`
has the coefficients fit on the entire data set
@@ -666,7 +679,7 @@ results = skm.cross_validate(pipeCV,
```
### The Lasso
We saw that ridge regression with a wise choice of $\lambda$ can
@@ -721,7 +734,7 @@ regression (page 305) with $\lambda$ chosen by cross-validation.
np.min(tuned_lasso.mse_path_.mean(1))
```
Let's again produce a plot of the cross-validation error.
@@ -746,7 +759,7 @@ variables.
tuned_lasso.coef_
```
As in ridge regression, we could evaluate the test error
of cross-validated lasso by first splitting into
test and training sets and internally running
@@ -757,7 +770,7 @@ this as an exercise.
## PCR and PLS Regression
### Principal Components Regression
Principal components regression (PCR) can be performed using
`PCA()` from the `sklearn.decomposition`
@@ -778,7 +791,7 @@ pipe.fit(X, Y)
pipe.named_steps['linreg'].coef_
```
When performing PCA, the results vary depending
on whether the data has been *standardized* or not.
As in the earlier examples, this can be accomplished
@@ -792,7 +805,7 @@ pipe.fit(X, Y)
pipe.named_steps['linreg'].coef_
```
We can of course use CV to choose the number of components, by
using `skm.GridSearchCV`, in this
case fixing the parameters to vary the
@@ -807,7 +820,7 @@ grid = skm.GridSearchCV(pipe,
grid.fit(X, Y)
```
Let's plot the results as we have for other methods.
```{python}
@@ -822,7 +835,7 @@ ax.set_xticks(n_comp[::2])
ax.set_ylim([50000,250000]);
```
We see that the smallest cross-validation error occurs when
17
components are used. However, from the plot we also see that the
@@ -846,8 +859,8 @@ cv_null = skm.cross_validate(linreg,
-cv_null['test_score'].mean()
```
The `explained_variance_ratio_`
attribute of our `PCA` object provides the *percentage of variance explained* in the predictors and in the response using
different numbers of components. This concept is discussed in greater
@@ -857,7 +870,7 @@ detail in Section 12.2.
pipe.named_steps['pca'].explained_variance_ratio_
```
Briefly, we can think of
this as the amount of information about the predictors
that is captured using $M$ principal components. For example, setting
@@ -880,7 +893,7 @@ pls = PLSRegression(n_components=2,
pls.fit(X, Y)
```
As was the case in PCR, we will want to
use CV to choose the number of components.
@@ -893,7 +906,7 @@ grid = skm.GridSearchCV(pls,
grid.fit(X, Y)
```
As for our other methods, we plot the MSE.
```{python}
@@ -908,7 +921,7 @@ ax.set_xticks(n_comp[::2])
ax.set_ylim([50000,250000]);
```
CV error is minimized at 12,
though there is little noticeable difference between this point and a much smaller number, such as 2 or 3 components.

View File

@@ -9815,8 +9815,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,3 +1,16 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 7
@@ -17,7 +30,7 @@ from ISLP.models import (summarize,
ModelSpec as MS)
from statsmodels.stats.anova import anova_lm
```
We again collect the new imports
needed for this lab. Many of these are developed specifically for the
`ISLP` package.
@@ -38,7 +51,7 @@ from ISLP.pygam import (approx_lam,
anova as anova_gam)
```
## Polynomial Regression and Step Functions
We start by demonstrating how Figure 7.1 can be reproduced.
Let's begin by loading the data.
@@ -49,7 +62,7 @@ y = Wage['wage']
age = Wage['age']
```
Throughout most of this lab, our response is `Wage['wage']`, which
we have stored as `y` above.
As in Section 3.6.6, we will use the `poly()` function to create a model matrix
@@ -61,8 +74,8 @@ M = sm.OLS(y, poly_age.transform(Wage)).fit()
summarize(M)
```
This polynomial is constructed using the function `poly()`,
which creates
a special *transformer* `Poly()` (using `sklearn` terminology
@@ -83,7 +96,7 @@ on the second line, as well as in the plotting function developed below.
We now create a grid of values for `age` at which we want
predictions.
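A sketch of that step (the lab's own cell is elided in this diff; the number of grid points is illustrative):
```{python}
age_grid = np.linspace(age.min(), age.max(), 100)
age_df = pd.DataFrame({'age': age_grid})
```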
@@ -151,7 +164,7 @@ plot_wage_fit(age_df,
With polynomial regression we must decide on the degree of
the polynomial to use. Sometimes we just wing it, and decide to use
second or third degree polynomials, simply to obtain a nonlinear fit. But we can
@@ -182,7 +195,7 @@ anova_lm(*[sm.OLS(y, X_).fit()
for X_ in Xs])
```
Notice the `*` in the `anova_lm()` line above. This
function takes a variable number of non-keyword arguments, in this case fitted models.
When these models are provided as a list (as is done here), it must be
@@ -207,8 +220,8 @@ that `poly()` creates orthogonal polynomials.
summarize(M)
```
Notice that the p-values are the same, and in fact the square of
the t-statistics are equal to the F-statistics from the
`anova_lm()` function; for example:
@@ -217,8 +230,8 @@ the t-statistics are equal to the F-statistics from the
(-11.983)**2
```
However, the ANOVA method works whether or not we used orthogonal
polynomials, provided the models are nested. For example, we can use
`anova_lm()` to compare the following three
@@ -233,8 +246,8 @@ XEs = [model.fit_transform(Wage)
anova_lm(*[sm.OLS(y, X_).fit() for X_ in XEs])
```
As an alternative to using hypothesis tests and ANOVA, we could choose
the polynomial degree using cross-validation, as discussed in Chapter 5.
@@ -254,8 +267,8 @@ B = glm.fit()
summarize(B)
```
Once again, we make predictions using the `get_prediction()` method.
```{python}
@@ -264,7 +277,7 @@ preds = B.get_prediction(newX)
bands = preds.conf_int(alpha=0.05)
```
We now plot the estimated relationship.
```{python}
@@ -306,8 +319,8 @@ cut_age = pd.qcut(age, 4)
summarize(sm.OLS(y, pd.get_dummies(cut_age)).fit())
```
Here `pd.qcut()` automatically picked the cutpoints based on the quantiles 25%, 50% and 75%, which results in four regions. We could also have specified our own
quantiles directly instead of the argument `4`. For cuts not based
on quantiles we would use the `pd.cut()` function.
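For instance (a sketch; the cutpoints are illustrative), `pd.cut()` takes explicit bin edges:
```{python}
cut_age_explicit = pd.cut(age, [0, 25, 40, 60, 80])
summarize(sm.OLS(y, pd.get_dummies(cut_age_explicit)).fit())
```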
@@ -364,7 +377,7 @@ M = sm.OLS(y, Xbs).fit()
summarize(M)
```
Notice that there are 6 spline coefficients rather than 7. This is because, by default,
`bs()` assumes `intercept=False`, since we typically have an overall intercept in the model.
So it generates the spline basis with the given knots, and then discards one of the basis functions to account for the intercept.
@@ -422,7 +435,7 @@ deciding bin membership.
In order to fit a natural spline, we use the `NaturalSpline()`
transform with the corresponding helper `ns()`. Here we fit a natural spline with five
degrees of freedom (excluding the intercept) and plot the results.
@@ -440,7 +453,7 @@ plot_wage_fit(age_df,
'Natural spline, df=5');
```
## Smoothing Splines and GAMs
A smoothing spline is a special case of a GAM with squared-error loss
and a single feature. To fit GAMs in `Python` we will use the
@@ -459,7 +472,7 @@ gam = LinearGAM(s_gam(0, lam=0.6))
gam.fit(X_age, y)
```
The `pygam` library generally expects a matrix of features so we reshape `age` to be a matrix (a two-dimensional array) instead
of a vector (i.e. a one-dimensional array). The `-1` in the call to the `reshape()` method tells `numpy` to impute the
size of that dimension based on the remaining entries of the shape tuple.
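The reshaping step just described looks like this (a sketch matching the description above):
```{python}
X_age = np.asarray(age).reshape((-1, 1))   # n x 1 feature matrix
X_age.shape
```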
@@ -482,7 +495,7 @@ ax.set_ylabel('Wage', fontsize=20);
ax.legend(title='$\lambda$');
```
The `pygam` package can perform a search for an optimal smoothing parameter.
```{python}
@@ -495,7 +508,7 @@ ax.legend()
fig
```
Alternatively, we can fix the degrees of freedom of the smoothing
spline using a function included in the `ISLP.pygam` package. Below we
find a value of $\lambda$ that gives us roughly four degrees of
@@ -510,8 +523,8 @@ age_term.lam = lam_4
degrees_of_freedom(X_age, age_term)
```
Let's vary the degrees of freedom in a plot similar to the one above. We choose the degrees of freedom
as the desired degrees of freedom plus one to account for the fact that these smoothing
splines always have an intercept term. Hence, a value of one for `df` is just a linear fit.
@@ -623,7 +636,7 @@ ax.set_ylabel('Effect on wage')
ax.set_title('Partial dependence of year on wage', fontsize=20);
```
We now fit the model (7.16) using smoothing splines rather
than natural splines. All of the
terms in (7.16) are fit simultaneously, taking each other
@@ -715,7 +728,7 @@ gam_linear = LinearGAM(age_term +
gam_linear.fit(Xgam, y)
```
Notice our use of `age_term` in the expressions above. We do this because
earlier we set the value for `lam` in this term to achieve four degrees of freedom.
@@ -762,7 +775,7 @@ We can make predictions from `gam` objects, just like from
Yhat = gam_full.predict(Xgam)
```
In order to fit a logistic regression GAM, we use `LogisticGAM()`
from `pygam`.
@@ -773,7 +786,7 @@ gam_logit = LogisticGAM(age_term +
gam_logit.fit(Xgam, high_earn)
```
```{python}
fig, ax = subplots(figsize=(8, 8))
@@ -825,8 +838,8 @@ gam_logit_ = LogisticGAM(age_term +
gam_logit_.fit(Xgam_, high_earn_)
```
Let's look at the effect of `education`, `year` and `age` on high earner status now that we've
removed those observations.
@@ -859,7 +872,7 @@ ax.set_ylabel('Effect on wage')
ax.set_title('Partial dependence of high earner status on age', fontsize=20);
```
## Local Regression
We illustrate the use of local regression using the `lowess()`

View File

@@ -3222,8 +3222,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,3 +1,16 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 8
@@ -33,10 +46,10 @@ from sklearn.ensemble import \
from ISLP.bart import BART
```
## Fitting Classification Trees
We first use classification trees to analyze the `Carseats` data set.
In these data, `Sales` is a continuous variable, and so we begin
@@ -52,7 +65,7 @@ High = np.where(Carseats.Sales > 8,
"No")
```
We now use `DecisionTreeClassifier()` to fit a classification tree in
order to predict `High` using all variables but `Sales`.
To do so, we must form a model matrix as we did when fitting regression
@@ -80,8 +93,8 @@ clf = DTC(criterion='entropy',
clf.fit(X, High)
```
In our discussion of qualitative features in Section 3.3,
we noted that for a linear regression model such a feature could be
represented by including a matrix of dummy variables (one-hot-encoding) in the model
@@ -97,8 +110,8 @@ advantage of this approach; instead it simply treats the one-hot-encoded levels
accuracy_score(High, clf.predict(X))
```
With only the default arguments, the training error rate is
21%.
For classification trees, we can
@@ -116,7 +129,7 @@ resid_dev = np.sum(log_loss(High, clf.predict_proba(X)))
resid_dev
```
This is closely related to the *entropy*, defined in (8.7).
A small deviance indicates a
tree that provides a good fit to the (training) data.
@@ -148,7 +161,7 @@ print(export_text(clf,
show_weights=True))
```
In order to properly evaluate the performance of a classification tree
on these data, we must estimate the test error rather than simply
computing the training error. We split the observations into a
@@ -251,8 +264,8 @@ confusion = confusion_table(best_.predict(X_test),
confusion
```
Now 72.0% of the test observations are correctly classified, which is slightly worse than the accuracy of the full tree (with 35 leaves). So cross-validation has not helped us much here; it only pruned off 5 leaves, at a cost of a slightly worse error. These results would change if we were to change the random number seeds above; even though cross-validation gives an unbiased approach to model selection, it does have variance.
@@ -270,7 +283,7 @@ feature_names = list(D.columns)
X = np.asarray(D)
```
First, we split the data into training and test sets, and fit the tree
to the training data. Here we use 30% of the data for the test set.
@@ -285,7 +298,7 @@ to the training data. Here we use 30% of the data for the test set.
random_state=0)
```
Having formed our training and test data sets, we fit the regression tree.
```{python}
@@ -297,7 +310,7 @@ plot_tree(reg,
ax=ax);
```
The variable `lstat` measures the percentage of individuals with
lower socioeconomic status. The tree indicates that lower
values of `lstat` correspond to more expensive houses.
@@ -321,7 +334,7 @@ grid = skm.GridSearchCV(reg,
G = grid.fit(X_train, y_train)
```
In keeping with the cross-validation results, we use the pruned tree
to make predictions on the test set.
@@ -330,8 +343,8 @@ best_ = grid.best_estimator_
np.mean((y_test - best_.predict(X_test))**2)
```
In other words, the test set MSE associated with the regression tree
is 28.07. The square root of
the MSE is therefore around
@@ -354,7 +367,7 @@ plot_tree(G.best_estimator_,
## Bagging and Random Forests
Here we apply bagging and random forests to the `Boston` data, using
the `RandomForestRegressor()` from the `sklearn.ensemble` package. Recall
@@ -367,8 +380,8 @@ bag_boston = RF(max_features=X_train.shape[1], random_state=0)
bag_boston.fit(X_train, y_train)
```
The argument `max_features` indicates that all 12 predictors should
be considered for each split of the tree --- in other words, that
bagging should be done. How well does this bagged model perform on
@@ -381,7 +394,7 @@ ax.scatter(y_hat_bag, y_test)
np.mean((y_test - y_hat_bag)**2)
```
The test set MSE associated with the bagged regression tree is
14.63, about half that obtained using an optimally-pruned single
tree. We could change the number of trees grown from the default of
@@ -412,8 +425,8 @@ y_hat_RF = RF_boston.predict(X_test)
np.mean((y_test - y_hat_RF)**2)
```
The test set MSE is 20.04;
this indicates that random forests did somewhat worse than bagging
in this case. Extracting the `feature_importances_` values from the fitted model, we can view the
@@ -437,7 +450,7 @@ house size (`rm`) are by far the two most important variables.
## Boosting
Here we use `GradientBoostingRegressor()` from `sklearn.ensemble`
to fit boosted regression trees to the `Boston` data
@@ -456,7 +469,7 @@ boost_boston = GBR(n_estimators=5000,
boost_boston.fit(X_train, y_train)
```
We can see how the training error decreases with the `train_score_` attribute.
To get an idea of how the test error decreases we can use the
`staged_predict()` method to get the predicted values along the path.
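A sketch of that computation (the lab's own cell is elided in this diff; the variable names are illustrative):
```{python}
test_error = np.zeros_like(boost_boston.train_score_)
for idx, y_ in enumerate(boost_boston.staged_predict(X_test)):
    test_error[idx] = np.mean((y_test - y_)**2)   # test MSE at each stage
```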
@@ -479,7 +492,7 @@ ax.plot(plot_idx,
ax.legend();
```
We now use the boosted model to predict `medv` on the test set:
```{python}
@@ -487,7 +500,7 @@ y_hat_boost = boost_boston.predict(X_test);
np.mean((y_test - y_hat_boost)**2)
```
The test MSE obtained is 14.48,
similar to the test MSE for bagging. If we want to, we can
perform boosting with a different value of the shrinkage parameter
@@ -505,8 +518,8 @@ y_hat_boost = boost_boston.predict(X_test);
np.mean((y_test - y_hat_boost)**2)
```
In this case, using $\lambda=0.2$ leads to almost the same test MSE
as when using $\lambda=0.001$.
@@ -514,7 +527,7 @@ as when using $\lambda=0.001$.
## Bayesian Additive Regression Trees
In this section we demonstrate a `Python` implementation of BART found in the
`ISLP.bart` package. We fit a model
@@ -527,8 +540,8 @@ bart_boston = BART(random_state=0, burnin=5, ndraw=15)
bart_boston.fit(X_train, y_train)
```
On this data set, with this split into test and training, we see that the test error of BART is similar to that of random forest.
```{python}
@@ -536,8 +549,8 @@ yhat_test = bart_boston.predict(X_test.astype(np.float32))
np.mean((y_test - yhat_test)**2)
```
We can check how many times each variable appeared in the collection of trees.
This gives a summary similar to the variable importance plot for boosting and random forests.

View File

@@ -1759,8 +1759,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,8 +1,21 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 9
# Lab: Support Vector Machines
In this lab, we use the `sklearn.svm` library to demonstrate the support
@@ -26,7 +39,7 @@ from ISLP.svm import plot as plot_svm
from sklearn.metrics import RocCurveDisplay
```
We will use the function `RocCurveDisplay.from_estimator()` to
produce several ROC plots, using a shorthand `roc_curve`.
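That shorthand is just a binding to the estimator-based constructor (a sketch matching the description above):
```{python}
roc_curve = RocCurveDisplay.from_estimator
```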
@@ -71,8 +84,8 @@ svm_linear = SVC(C=10, kernel='linear')
svm_linear.fit(X, y)
```
The support vector classifier with two features can
be visualized by plotting values of its *decision function*.
We have included a function for this in the `ISLP` package (inspired by a similar
@@ -86,7 +99,7 @@ plot_svm(X,
ax=ax)
```
The decision
boundary between the two classes is linear (because we used the
argument `kernel='linear'`). The support vectors are marked with `+`
@@ -113,8 +126,8 @@ coefficients of the linear decision boundary as follows:
svm_linear.coef_
```
Since the support vector machine is an estimator in `sklearn`, we
can use the usual machinery to tune it.
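A sketch of such a grid search (assuming `sklearn.model_selection` is imported as `skm`, as elsewhere in these labs; the candidate values of `C` are illustrative):
```{python}
kfold = skm.KFold(5, random_state=0, shuffle=True)
grid = skm.GridSearchCV(svm_linear,
                        {'C': [0.001, 0.01, 0.1, 1, 5, 10, 100]},
                        refit=True,
                        cv=kfold,
                        scoring='accuracy')
```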
@@ -131,8 +144,8 @@ grid.fit(X, y)
grid.best_params_
```
We can easily access the cross-validation errors for each of these models
in `grid.cv_results_`. This prints out a lot of detail, so we
extract the accuracy results only.
@@ -153,7 +166,7 @@ y_test = np.array([-1]*10+[1]*10)
X_test[y_test==1] += 1
```
Now we predict the class labels of these test observations. Here we
use the best model selected by cross-validation in order to make the
predictions.
@@ -164,7 +177,7 @@ y_test_hat = best_.predict(X_test)
confusion_table(y_test_hat, y_test)
```
Thus, with this value of `C`,
70% of the test
observations are correctly classified. What if we had instead used
@@ -177,7 +190,7 @@ y_test_hat = svm_.predict(X_test)
confusion_table(y_test_hat, y_test)
```
In this case 60% of test observations are correctly classified.
We now consider a situation in which the two classes are linearly
@@ -192,7 +205,7 @@ fig, ax = subplots(figsize=(8,8))
ax.scatter(X[:,0], X[:,1], c=y, cmap=cm.coolwarm);
```
Now the observations are just barely linearly separable.
```{python}
@@ -201,7 +214,7 @@ y_hat = svm_.predict(X)
confusion_table(y_hat, y)
```
We fit the
support vector classifier and plot the resulting hyperplane, using a
very large value of `C` so that no observations are
@@ -227,7 +240,7 @@ y_hat = svm_.predict(X)
confusion_table(y_hat, y)
```
Using `C=0.1`, we again do not misclassify any training observations, but we
also obtain a much wider margin and make use of twelve support
vectors. These jointly define the orientation of the decision boundary, and since there are more of them, it is more stable. It seems possible that this model will perform better on test
@@ -241,7 +254,7 @@ plot_svm(X,
ax=ax)
```
## Support Vector Machine
In order to fit an SVM using a non-linear kernel, we once again use
@@ -264,7 +277,7 @@ X[100:150] -= 2
y = np.array([1]*150+[2]*50)
```
Plotting the data makes it clear that the class boundary is indeed non-linear.
```{python}
@@ -275,8 +288,8 @@ ax.scatter(X[:,0],
cmap=cm.coolwarm)
```
The data is randomly split into training and testing groups. We then
fit the training data using the `SVC()` estimator with a
radial kernel and $\gamma=1$:
@@ -293,7 +306,7 @@ svm_rbf = SVC(kernel="rbf", gamma=1, C=1)
svm_rbf.fit(X_train, y_train)
```
The plot shows that the resulting SVM has a decidedly non-linear
boundary.
@@ -305,7 +318,7 @@ plot_svm(X_train,
ax=ax)
```
We can see from the figure that there are a fair number of training
errors in this SVM fit. If we increase the value of `C`, we
can reduce the number of training errors. However, this comes at the
@@ -322,7 +335,7 @@ plot_svm(X_train,
ax=ax)
```
We can perform cross-validation using `skm.GridSearchCV()` to select the
best choice of $\gamma$ and `C` for an SVM with a radial
kernel:
@@ -341,7 +354,7 @@ grid.fit(X_train, y_train)
grid.best_params_
```
The best choice of parameters under five-fold CV is achieved at `C=1`
and `gamma=0.5`, though several other values also achieve the same
value.
@@ -358,7 +371,7 @@ y_hat_test = best_svm.predict(X_test)
confusion_table(y_hat_test, y_test)
```
With these parameters, 12% of test
observations are misclassified by this SVM.
@@ -418,7 +431,7 @@ roc_curve(svm_flex,
ax=ax);
```
However, these ROC curves are all on the training data. We are really
more interested in the level of prediction accuracy on the test
data. When we compute the ROC curves on the test data, the model with
@@ -434,7 +447,7 @@ roc_curve(svm_flex,
fig;
```
Let's look at our tuned SVM.
```{python}
@@ -453,7 +466,7 @@ for (X_, y_, c, name) in zip(
color=c)
```
## SVM with Multiple Classes
If the response is a factor containing more than two levels, then the
@@ -472,7 +485,7 @@ fig, ax = subplots(figsize=(8,8))
ax.scatter(X[:,0], X[:,1], c=y, cmap=cm.coolwarm);
```
We now fit an SVM to the data:
```{python}
@@ -508,7 +521,7 @@ Khan = load_data('Khan')
Khan['xtrain'].shape, Khan['xtest'].shape
```
This data set consists of expression measurements for 2,308
genes. The training and test sets consist of 63 and 20
observations, respectively.
@@ -527,7 +540,7 @@ confusion_table(khan_linear.predict(Khan['xtrain']),
Khan['ytrain'])
```
We see that there are *no* training
errors. In fact, this is not surprising, because the large number of
variables relative to the number of observations implies that it is
@@ -540,7 +553,7 @@ confusion_table(khan_linear.predict(Khan['xtest']),
Khan['ytest'])
```
We see that using `C=10` yields two test set errors on these data.

View File

@@ -1900,8 +1900,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,3 +1,16 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 11
@@ -24,7 +37,7 @@ from ISLP.models import ModelSpec as MS
from ISLP import load_data
```
We also collect the new imports
needed for this lab.
@@ -48,7 +61,7 @@ BrainCancer = load_data('BrainCancer')
BrainCancer.columns
```
The rows index the 88 patients, while the 8 columns contain the predictors and outcome variables.
We first briefly examine the data.
@@ -56,20 +69,20 @@ We first briefly examine the data.
BrainCancer['sex'].value_counts()
```
```{python}
BrainCancer['diagnosis'].value_counts()
```
```{python}
BrainCancer['status'].value_counts()
```
Before beginning an analysis, it is important to know how the
`status` variable has been coded. Most software
uses the convention that a `status` of 1 indicates an
@@ -96,7 +109,7 @@ km_brain = km.fit(BrainCancer['time'], BrainCancer['status'])
km_brain.plot(label='Kaplan Meier estimate', ax=ax)
```
Next we create Kaplan-Meier survival curves that are stratified by
`sex`, in order to reproduce Figure 11.3.
We do this using the `groupby()` method of a dataframe.
@@ -125,7 +138,7 @@ for sex, df in BrainCancer.groupby('sex'):
km_sex.plot(label='Sex=%s' % sex, ax=ax)
```
As discussed in Section 11.4, we can perform a
log-rank test to compare the survival of males to females. We use
the `logrank_test()` function from the `lifelines.statistics` module.
@@ -139,8 +152,8 @@ logrank_test(by_sex['Male']['time'],
by_sex['Female']['status'])
```
The resulting $p$-value is $0.23$, indicating no evidence of a
difference in survival between the two sexes.
@@ -159,7 +172,7 @@ cox_fit = coxph().fit(model_df,
cox_fit.summary[['coef', 'se(coef)', 'p']]
```
The first argument to `fit` should be a data frame containing
at least the event time (the second argument `time` in this case),
as well as an optional censoring variable (the argument `status` in this case).
@@ -173,7 +186,7 @@ with no features as follows:
cox_fit.log_likelihood_ratio_test()
```
Regardless of which test we use, we see that there is no clear
evidence for a difference in survival between males and females. As
we learned in this chapter, the score test from the Cox model is
@@ -193,7 +206,7 @@ fit_all = coxph().fit(all_df,
fit_all.summary[['coef', 'se(coef)', 'p']]
```
The `diagnosis` variable has been coded so that the baseline
corresponds to HG glioma. The results indicate that the risk associated with HG glioma
is more than eight times (i.e. $e^{2.15}=8.62$) the risk associated
@@ -220,7 +233,7 @@ def representative(series):
modal_data = cleaned.apply(representative, axis=0)
```
We make four
copies of the column means and assign the `diagnosis` column to be the four different
diagnoses.
@@ -232,7 +245,7 @@ modal_df['diagnosis'] = levels
modal_df
```
We then construct the model matrix based on the model specification `all_MS` used to fit
the model, and name the rows according to the levels of `diagnosis`.
@@ -259,7 +272,7 @@ fig, ax = subplots(figsize=(8, 8))
predicted_survival.plot(ax=ax);
```
## Publication Data
The `Publication` data presented in Section 11.5.4 can be
@@ -278,7 +291,7 @@ for result, df in Publication.groupby('posres'):
km_result.plot(label='Result=%d' % result, ax=ax)
```
As discussed previously, the $p$-values from fitting Cox's
proportional hazards model to the `posres` variable are quite
large, providing no evidence of a difference in time-to-publication
@@ -295,8 +308,8 @@ posres_fit = coxph().fit(posres_df,
posres_fit.summary[['coef', 'se(coef)', 'p']]
```
However, the results change dramatically when we include other
predictors in the model. Here we exclude the funding mechanism
variable.
@@ -309,7 +322,7 @@ coxph().fit(model.fit_transform(Publication),
'status').summary[['coef', 'se(coef)', 'p']]
```
We see that there are a number of statistically significant variables,
including whether the trial focused on a clinical endpoint, the impact
of the study, and whether the study had positive or negative results.
@@ -359,7 +372,7 @@ model = MS(['Operators',
intercept=False)
X = model.fit_transform(D)
```
It is worthwhile to take a peek at the model matrix `X`, so
that we can be sure that we understand how the variables have been coded. By default,
the levels of categorical variables are sorted and, as usual, the first column of the one-hot encoding
@@ -369,7 +382,7 @@ of the variable is dropped.
X[:5]
```
Next, we specify the coefficients and the hazard function.
```{python}
@@ -418,7 +431,7 @@ W = np.array([sim_time(l, cum_hazard, rng)
D['Wait time'] = np.clip(W, 0, 1000)
```
We now simulate our censoring variable, for which we assume
90% of calls were answered (`Failed==1`) before the
customer hung up (`Failed==0`).
@@ -430,13 +443,13 @@ D['Failed'] = rng.choice([1, 0],
D[:5]
```
```{python}
D['Failed'].mean()
```
We now plot Kaplan-Meier survival curves. First, we stratify by `Center`.
```{python}
@@ -449,7 +462,7 @@ for center, df in D.groupby('Center'):
ax.set_title("Probability of Still Being on Hold")
```
Next, we stratify by `Time`.
```{python}
@@ -462,7 +475,7 @@ for time, df in D.groupby('Time'):
ax.set_title("Probability of Still Being on Hold")
```
It seems that calls at Call Center B take longer to be answered than
calls at Centers A and C. Similarly, it appears that wait times are
longest in the morning and shortest in the evening hours. We can use a
@@ -475,8 +488,8 @@ multivariate_logrank_test(D['Wait time'],
D['Failed'])
```
Next, we consider the effect of `Time`.
```{python}
@@ -485,8 +498,8 @@ multivariate_logrank_test(D['Wait time'],
D['Failed'])
```
As in the case of a categorical variable with 2 levels, these
results are similar to the likelihood ratio test
from the Cox proportional hazards model. First, we
@@ -501,8 +514,8 @@ F = coxph().fit(X, 'Wait time', 'Failed')
F.log_likelihood_ratio_test()
```
Next, we look at the results for `Time`.
```{python}
@@ -514,8 +527,8 @@ F = coxph().fit(X, 'Wait time', 'Failed')
F.log_likelihood_ratio_test()
```
We find that differences between centers are highly significant, as
are differences between times of day.
@@ -531,8 +544,8 @@ fit_queuing = coxph().fit(
fit_queuing.summary[['coef', 'se(coef)', 'p']]
```
The $p$-values for Center B and evening time
are very small. It is also clear that the
hazard --- that is, the instantaneous risk that a call will be

View File

@@ -2703,8 +2703,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,3 +1,16 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 12
@@ -31,7 +44,7 @@ from scipy.cluster.hierarchy import \
from ISLP.cluster import compute_linkage
```
## Principal Components Analysis
In this lab, we perform PCA on `USArrests`, a data set in the
`R` computing environment.
@@ -45,22 +58,22 @@ USArrests = get_rdataset('USArrests').data
USArrests
```
The columns of the data set contain the four variables.
```{python}
USArrests.columns
```
We first briefly examine the data. We notice that the variables have vastly different means.
```{python}
USArrests.mean()
```
Dataframes have several useful methods for computing
column-wise summaries. We can also examine the
variance of the four variables using the `var()` method.
@@ -69,7 +82,7 @@ variance of the four variables using the `var()` method.
USArrests.var()
```
Not surprisingly, the variables also have vastly different variances.
The `UrbanPop` variable measures the percentage of the population
in each state living in an urban area, which is not a comparable
@@ -119,7 +132,7 @@ of the variables. In this case, since we centered and scaled the data with
pcaUS.mean_
```
The scores can be computed using the `transform()` method
of `pcaUS` after it has been fit.
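A sketch of that step (assuming the standardized data array from earlier in the lab is named `USArrests_scaled`; the name is an assumption):
```{python}
scores = pcaUS.transform(USArrests_scaled)
scores[:5]
```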
@@ -137,7 +150,7 @@ principal component loading vector.
pcaUS.components_
```
The `biplot` is a common visualization method used with
PCA. It is not built in as a standard
part of `sklearn`, though there are `Python`
@@ -178,14 +191,14 @@ for k in range(pcaUS.components_.shape[1]):
USArrests.columns[k])
```
The standard deviations of the principal component scores are as follows:
```{python}
scores.std(0, ddof=1)
```
The variance of each score can be extracted directly from the `pcaUS` object via
the `explained_variance_` attribute.
@@ -207,7 +220,7 @@ We can plot the PVE explained by each component, as well as the cumulative PVE.
plot the proportion of variance explained.
```{python}
%%capture
# %%capture
fig, axes = plt.subplots(1, 2, figsize=(15, 6))
ticks = np.arange(pcaUS.n_components_)+1
ax = axes[0]
@@ -307,7 +320,7 @@ Xna = X.copy()
Xna[r_idx, c_idx] = np.nan
```
Here the array `r_idx`
contains 20 integers from 0 to 49; this represents the states (rows of `X`) that are selected to contain missing values. And `c_idx` contains
20 integers from 0 to 3, representing the features (columns in `X`) that contain the missing values for each of the selected states.
@@ -335,7 +348,7 @@ Xbar = np.nanmean(Xhat, axis=0)
Xhat[r_idx, c_idx] = Xbar[c_idx]
```
Before we begin Step 2, we set ourselves up to measure the progress of our
iterations:
@@ -374,7 +387,7 @@ while rel_err > thresh:
.format(count, mss, rel_err))
```
We see that after eight iterations, the relative error has fallen below `thresh = 1e-7`, and so the algorithm terminates. When this happens, the mean squared error of the non-missing elements equals 0.381.
Finally, we compute the correlation between the 20 imputed values
@@ -384,8 +397,8 @@ and the actual values:
np.corrcoef(Xapp[ismiss], X[ismiss])[0,1]
```
In this lab, we implemented Algorithm 12.1 ourselves for didactic purposes. However, a reader who wishes to apply matrix completion to their data might look to more specialized `Python` implementations.
@@ -431,7 +444,7 @@ ax.scatter(X[:,0], X[:,1], c=kmeans.labels_)
ax.set_title("K-Means Clustering Results with K=2");
```
Here the observations can be easily plotted because they are
two-dimensional. If there were more than two variables then we could
instead perform PCA and plot the first two principal component score
@@ -506,7 +519,7 @@ hc_comp = HClust(distance_threshold=0,
hc_comp.fit(X)
```
This computes the entire dendrogram.
We could just as easily perform hierarchical clustering with average or single linkage instead:
@@ -521,7 +534,7 @@ hc_sing = HClust(distance_threshold=0,
hc_sing.fit(X);
```
To use a precomputed distance matrix, we provide an additional
argument `metric="precomputed"`. In the code below, the first four lines compute the $50\times 50$ pairwise-distance matrix.
@@ -537,7 +550,7 @@ hc_sing_pre = HClust(distance_threshold=0,
hc_sing_pre.fit(D)
```
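As a hedged alternative sketch, the same pairwise-distance matrix can be built with `pdist()` and `squareform()` from `scipy.spatial.distance`.

```{python}
from scipy.spatial.distance import pdist, squareform
# condensed pairwise Euclidean distances, expanded to a square 50x50 matrix
D_alt = squareform(pdist(X))
D_alt.shape
```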
We use
`dendrogram()` from `scipy.cluster.hierarchy` to plot the dendrogram. However,
`dendrogram()` expects a so-called *linkage-matrix representation*
@@ -560,7 +573,7 @@ dendrogram(linkage_comp,
**cargs);
```
We may want to color branches of the tree above
and below a cut-threshold differently. This can be achieved
by changing the `color_threshold`. Let's cut the tree at a height of 4,
@@ -574,7 +587,7 @@ dendrogram(linkage_comp,
above_threshold_color='black');
```
To determine the cluster labels for each observation associated with a
given cut of the dendrogram, we can use the `cut_tree()`
function from `scipy.cluster.hierarchy`:
@@ -594,7 +607,7 @@ or `height` to `cut_tree()`.
cut_tree(linkage_comp, height=5)
```
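Alternatively, as a sketch, we can ask `cut_tree()` for a fixed number of clusters rather than a cut height.

```{python}
# one label per observation when the tree is cut into four clusters
cut_tree(linkage_comp, n_clusters=4).T
```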
To scale the variables before performing hierarchical clustering of
the observations, we use `StandardScaler()` as in our PCA example:
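A minimal sketch of this step (assuming `StandardScaler` has been imported from `sklearn.preprocessing` as in the PCA section, and using the `HClust` alias from above) could look like the following.

```{python}
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)         # standardize each column of X
hc_comp_scaled = HClust(distance_threshold=0,
                        n_clusters=None,
                        linkage='complete')
hc_comp_scaled.fit(X_scaled);
```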
@@ -638,7 +651,7 @@ dendrogram(linkage_cor, ax=ax, **cargs)
ax.set_title("Complete Linkage with Correlation-Based Dissimilarity");
```
## NCI60 Data Example
Unsupervised techniques are often used in the analysis of genomic
@@ -653,7 +666,7 @@ nci_labs = NCI60['labels']
nci_data = NCI60['data']
```
Each cell line is labeled with a cancer type. We do not make use of
the cancer types in performing PCA and clustering, as these are
unsupervised techniques. But after performing PCA and clustering, we
@@ -666,8 +679,8 @@ The data has 64 rows and 6830 columns.
nci_data.shape
```
We begin by examining the cancer types for the cell lines.
@@ -675,7 +688,7 @@ We begin by examining the cancer types for the cell lines.
nci_labs.value_counts()
```
### PCA on the NCI60 Data
@@ -690,7 +703,7 @@ nci_pca = PCA()
nci_scores = nci_pca.fit_transform(nci_scaled)
```
We now plot the first few principal component score vectors, in order
to visualize the data. The observations (cell lines) corresponding to
a given cancer type will be plotted in the same color, so that we can
@@ -726,7 +739,7 @@ to have pretty similar gene expression levels.
We can also plot the percent variance
explained by the principal components as well as the cumulative percent variance explained.
This is similar to the plots we made earlier for the `USArrests` data.
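A hedged sketch of such a plot is given below; the figure size and marker style are arbitrary choices, and `nci_pca` is the fitted PCA object from above.

```{python}
fig, axes = plt.subplots(1, 2, figsize=(15, 6))
ticks = np.arange(nci_pca.n_components_) + 1
pve = nci_pca.explained_variance_ratio_
axes[0].plot(ticks, pve, marker='o')
axes[0].set_xlabel('Principal Component')
axes[0].set_ylabel('PVE')
axes[1].plot(ticks, np.cumsum(pve), marker='o')
axes[1].set_xlabel('Principal Component')
axes[1].set_ylabel('Cumulative PVE');
```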
@@ -785,7 +798,7 @@ def plot_nci(linkage, ax, cut=-np.inf):
return hc
```
Let's plot our results.
```{python}
@@ -817,7 +830,7 @@ pd.crosstab(nci_labs['label'],
pd.Series(comp_cut.reshape(-1), name='Complete'))
```
There are some clear patterns. All the leukemia cell lines fall in
one cluster, while the breast cancer cell lines are spread out over
@@ -831,7 +844,7 @@ plot_nci('Complete', ax, cut=140)
ax.axhline(140, c='r', linewidth=4);
```
The `axhline()` function draws a horizontal line on top of any
existing set of axes. The argument `140` plots a horizontal
line at height 140 on the dendrogram; this is a height that
@@ -853,7 +866,7 @@ pd.crosstab(pd.Series(comp_cut, name='HClust'),
pd.Series(nci_kmeans.labels_, name='K-means'))
```
We see that the four clusters obtained using hierarchical clustering
and $K$-means clustering are somewhat different. First we note
that the labels in the two clusterings are arbitrary. That is, swapping

View File

@@ -3392,8 +3392,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {

View File

@@ -1,9 +1,22 @@
---
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
jupytext_version: 1.14.7
---
# Chapter 13
# Lab: Multiple Testing
We include our usual imports seen in earlier labs.
@@ -15,7 +28,7 @@ import statsmodels.api as sm
from ISLP import load_data
```
We also collect the new imports
needed for this lab.
@@ -47,7 +60,7 @@ true_mean = np.array([0.5]*50 + [0]*50)
X += true_mean[None,:]
```
To begin, we use `ttest_1samp()` from the
`scipy.stats` module to test $H_{0}: \mu_1=0$, the null
hypothesis that the first variable has mean zero.
@@ -57,7 +70,7 @@ result = ttest_1samp(X[:,0], 0)
result.pvalue
```
The $p$-value comes out to 0.931, which is not low enough to
reject the null hypothesis at level $\alpha=0.05$. In this case,
$\mu_1=0.5$, so the null hypothesis is false. Therefore, we have made
@@ -154,7 +167,7 @@ ax.legend()
ax.axhline(0.05, c='k', ls='--');
```
As discussed previously, even for moderate values of $m$ such as $50$,
the FWER exceeds $0.05$ unless $\alpha$ is set to a very low value,
such as $0.001$. Of course, the problem with setting $\alpha$ to such
@@ -176,7 +189,7 @@ for i in range(5):
fund_mini_pvals
```
The $p$-values are low for Managers One and Three, and high for the
other three managers. However, we cannot simply reject $H_{0,1}$ and
$H_{0,3}$, since this would fail to account for the multiple testing
@@ -206,8 +219,8 @@ reject, bonf = mult_test(fund_mini_pvals, method = "bonferroni")[:2]
reject
```
The $p$-values `bonf` are simply the `fund_mini_pvals` multiplied by 5 and truncated to be less than
or equal to 1.
@@ -215,7 +228,7 @@ or equal to 1.
bonf, np.minimum(fund_mini_pvals * 5, 1)
```
Therefore, using Bonferroni's method, we are able to reject the null hypothesis only for Manager
One while controlling FWER at $0.05$.
@@ -227,8 +240,8 @@ hypotheses for Managers One and Three at a FWER of $0.05$.
mult_test(fund_mini_pvals, method = "holm", alpha=0.05)[:2]
```
As discussed previously, Manager One seems to perform particularly
well, whereas Manager Two has poor performance.
@@ -237,8 +250,8 @@ well, whereas Manager Two has poor performance.
fund_mini.mean()
```
Is there evidence of a meaningful difference in performance between
these two managers? We can check this by performing a paired $t$-test using the `ttest_rel()` function
from `scipy.stats`:
@@ -248,7 +261,7 @@ ttest_rel(fund_mini['Manager1'],
fund_mini['Manager2']).pvalue
```
The test results in a $p$-value of 0.038,
suggesting a statistically significant difference.
@@ -273,8 +286,8 @@ tukey = pairwise_tukeyhsd(returns, managers)
print(tukey.summary())
```
The `pairwise_tukeyhsd()` function provides confidence intervals
for the difference between each pair of managers (`lower` and
`upper`), as well as a $p$-value. All of these quantities have
@@ -304,7 +317,7 @@ for i, manager in enumerate(Fund.columns):
fund_pvalues[i] = ttest_1samp(Fund[manager], 0).pvalue
```
There are far too many managers to consider trying to control the FWER.
Instead, we focus on controlling the FDR: that is, the expected fraction of rejected null hypotheses that are actually false positives.
The `multipletests()` function (abbreviated `mult_test()`) can be used to carry out the Benjamini--Hochberg procedure.
@@ -314,7 +327,7 @@ fund_qvalues = mult_test(fund_pvalues, method = "fdr_bh")[1]
fund_qvalues[:10]
```
The *q-values* output by the
Benjamini--Hochberg procedure can be interpreted as the smallest FDR
threshold at which we would reject a particular null hypothesis. For
@@ -341,8 +354,8 @@ null hypotheses!
(fund_pvalues <= 0.1 / 2000).sum()
```
Figure 13.6 displays the ordered
$p$-values, $p_{(1)} \leq p_{(2)} \leq \cdots \leq p_{(2000)}$, for
the `Fund` data set, as well as the threshold for rejection by the
@@ -371,7 +384,7 @@ else:
sorted_set_ = []
```
We now reproduce the middle panel of Figure 13.6.
```{python}
@@ -386,7 +399,7 @@ ax.scatter(sorted_set_+1, sorted_[sorted_set_], c='r', s=20)
ax.axline((0, 0), (1,q/m), c='k', ls='--', linewidth=3);
```
## A Re-Sampling Approach
Here, we implement the re-sampling approach to hypothesis testing
@@ -402,8 +415,8 @@ D['Y'] = pd.concat([Khan['ytrain'], Khan['ytest']])
D['Y'].value_counts()
```
There are four classes of cancer. For each gene, we compare the mean
expression in the second class (rhabdomyosarcoma) to the mean
expression in the fourth class (Burkitt's lymphoma). Performing a
@@ -423,8 +436,8 @@ observedT, pvalue = ttest_ind(D2[gene_11],
observedT, pvalue
```
However, this $p$-value relies on the assumption that under the null
hypothesis of no difference between the two groups, the test statistic
follows a $t$-distribution with $29+25-2=52$ degrees of freedom.
@@ -452,8 +465,8 @@ for b in range(B):
(np.abs(Tnull) > np.abs(observedT)).mean()
```
This fraction, 0.0398,
is our re-sampling-based $p$-value.
It is almost identical to the $p$-value of 0.0412 obtained using the theoretical null distribution.
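One way to see this agreement, as a hedged sketch, is to plot the permutation null distribution together with the observed statistic.

```{python}
import matplotlib.pyplot as plt            # imported here so the sketch is self-contained
fig, ax = plt.subplots()
ax.hist(Tnull, bins=30, density=True)      # permutation null distribution
ax.axvline(observedT, c='r', linewidth=2)  # observed test statistic
ax.axvline(-observedT, c='r', linewidth=2);
```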
@@ -509,7 +522,7 @@ for j in range(m):
Tnull_vals[j,b] = ttest_.statistic
```
Next, we compute the number of rejected null hypotheses $R$, the
estimated number of false positives $\widehat{V}$, and the estimated
FDR, for a range of threshold values $c$ in
@@ -527,7 +540,7 @@ for j in range(m):
FDRs[j] = V / R
```
Now, for any given FDR, we can find the genes that will be
rejected. For example, with FDR controlled at 0.1, we reject 15 of the
100 null hypotheses. On average, we would expect about one or two of
@@ -543,7 +556,7 @@ the genes whose estimated FDR is less than 0.1.
sorted(idx[np.abs(T_vals) >= cutoffs[FDRs < 0.1].min()])
```
At an FDR threshold of 0.2, more genes are selected, at the cost of having a higher expected
proportion of false discoveries.
@@ -551,7 +564,7 @@ proportion of false discoveries.
sorted(idx[np.abs(T_vals) >= cutoffs[FDRs < 0.2].min()])
```
The next line generates Figure 13.11, which is similar
to Figure 13.9,
except that it is based on only a subset of the genes.

View File

@@ -1578,8 +1578,8 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
"formats": "ipynb,Rmd",
"main_language": "python"
},
"language_info": {
"codemirror_mode": {