updating version

This commit is contained in:
Jonathan Taylor
2026-02-02 15:51:46 -08:00
parent 681c8d2b4d
commit 39a00cc02d
26 changed files with 1528 additions and 2240 deletions

View File

@@ -2,22 +2,22 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
-jupytext_version: 1.16.7
+jupytext_version: 1.19.1
---
# Introduction to Python
-<a target="_blank" href="https://colab.research.google.com/github/intro-stat-learning/ISLP_labs/blob/v2.2/Ch02-statlearn-lab.ipynb">
+<a target="_blank" href="https://colab.research.google.com/github/intro-stat-learning/ISLP_labs/blob/v2.2.1/Ch02-statlearn-lab.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
-[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/intro-stat-learning/ISLP_labs/v2.2?labpath=Ch02-statlearn-lab.ipynb)
+[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/intro-stat-learning/ISLP_labs/v2.2.1?labpath=Ch02-statlearn-lab.ipynb)
@@ -83,7 +83,7 @@ print('fit a model with', 11, 'variables')
The following command will provide information about the `print()` function.
```{python}
-# print?
+print?
```
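The `?` suffix above is IPython/Jupyter syntax. In a plain Python session, where `?` is unavailable, the built-in `help()` prints the same documentation; a minimal equivalent:

```python
# help() prints the docstring of its argument;
# equivalent to `print?` in IPython, but works in any Python session.
help(print)
```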
@@ -233,7 +233,7 @@ documentation associated with the function `fun`, if it exists.
We can try this for `np.array()`.
```{python}
-# np.array?
+np.array?
```
This documentation indicates that we could create a floating point array by passing a `dtype` argument into `np.array()`.
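A minimal sketch of the `dtype` argument described above:

```python
import numpy as np

# dtype=float coerces the integer entries to floating point on creation.
x = np.array([3, 1, 4], dtype=float)
print(x.dtype)  # float64
```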
@@ -904,7 +904,6 @@ A[np.array([0,1,0,1])]
By contrast, `keep_rows` retrieves only the second and fourth rows of `A` --- i.e. the rows for which the Boolean equals `True`.
```{python}
A[keep_rows]
@@ -978,9 +977,10 @@ Auto
The book website also has a whitespace-delimited version of this data, called `Auto.data`. This can be read in as follows:
```{python}
-Auto = pd.read_csv('Auto.data', delim_whitespace=True)
+Auto = pd.read_csv("Auto.data", sep="\\s+")
```
Both `Auto.csv` and `Auto.data` are simply text
files. Before loading data into `Python`, it is a good idea to view it using
a text editor or other software, such as Microsoft Excel.
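The change above replaces the deprecated `delim_whitespace=True` with the equivalent `sep="\s+"` (pandas 2.2 deprecated the former). A small sketch, using an in-memory stand-in for `Auto.data` — the inline data here is made up for illustration:

```python
import io
import pandas as pd

# Hypothetical whitespace-delimited stand-in for Auto.data.
data = io.StringIO("mpg horsepower\n18.0 130\n15.0 165\n")

# sep=r"\s+" splits on runs of whitespace, like delim_whitespace=True did.
Auto = pd.read_csv(data, sep=r"\s+")
print(Auto.shape)  # (2, 2)
```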
@@ -1015,12 +1015,11 @@ value `np.nan`, which means *not a number*:
```{python}
Auto = pd.read_csv('Auto.data',
na_values=['?'],
-delim_whitespace=True)
+sep="\\s+")
Auto['horsepower'].sum()
```
The `Auto.shape` attribute tells us that the data has 397
observations, or rows, and nine variables, or columns.
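The `na_values=['?']` argument above converts each `'?'` entry to `np.nan`; a minimal sketch with made-up data:

```python
import io
import pandas as pd

# Hypothetical fragment in which '?' marks a missing horsepower value.
data = io.StringIO("mpg horsepower\n18.0 130\n25.0 ?\n")
Auto = pd.read_csv(data, na_values=['?'], sep=r"\s+")

# The '?' becomes np.nan, so horsepower is read as a float column.
print(Auto['horsepower'].isna().sum())  # 1
```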
@@ -1293,14 +1292,13 @@ because `Python` does not know to look in the `Auto` data set for those variab
```{python}
fig, ax = subplots(figsize=(8, 8))
-ax.plot(horsepower, mpg, 'o');
+ax.plot(horsepower, mpg, 'o')
```
We can address this by accessing the columns directly:
```{python}
fig, ax = subplots(figsize=(8, 8))
ax.plot(Auto['horsepower'], Auto['mpg'], 'o');
```
Alternatively, we can use the `plot()` method with the call `Auto.plot()`.
Using this method,
@@ -1402,8 +1400,7 @@ Auto['cylinders'].describe()
Auto['mpg'].describe()
```
To exit `Jupyter`, select `File / Shut Down`.
```{python}
```

File diff suppressed because one or more lines are too long

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
formats: ipynb,Rmd
formats: Rmd
main_language: python
text_representation:
extension: .Rmd

View File

@@ -1507,8 +1507,8 @@
"source": [
"ax = Boston.plot.scatter('lstat', 'medv')\n",
"abline(ax,\n",
-" results.params[0],\n",
-" results.params[1],\n",
+" results.params['intercept'],\n",
+" results.params['lstat'],\n",
" 'r--',\n",
" linewidth=3)\n"
]
@@ -2970,9 +2970,14 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
+"formats": "ipynb",
"main_language": "python"
},
+"kernelspec": {
+"display_name": "Python 3 (ipykernel)",
+"language": "python",
+"name": "python3"
+},
"language_info": {
"codemirror_mode": {
"name": "ipython",
@@ -2983,7 +2988,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.11"
+"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd

View File

@@ -6241,7 +6241,7 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
+"formats": "ipynb",
"main_language": "python"
},
"language_info": {

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd

View File

@@ -1283,9 +1283,14 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
+"formats": "ipynb",
"main_language": "python"
},
+"kernelspec": {
+"display_name": "Python 3 (ipykernel)",
+"language": "python",
+"name": "python3"
+},
"language_info": {
"codemirror_mode": {
"name": "ipython",
@@ -1296,7 +1301,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.11"
+"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd
@@ -148,6 +148,7 @@ hitters_MSE = sklearn_selected(OLS,
strategy)
hitters_MSE.fit(Hitters, Y)
+hitters_MSE.selected_state_
```
Using `neg_Cp` results in a smaller model, as expected, with just 10 variables selected.

File diff suppressed because it is too large

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd

View File

@@ -3296,7 +3296,7 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
+"formats": "ipynb",
"main_language": "python"
},
"language_info": {

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd

View File

@@ -3378,7 +3378,7 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
+"formats": "ipynb",
"main_language": "python"
},
"language_info": {

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd

View File

@@ -2701,7 +2701,7 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
+"formats": "ipynb",
"main_language": "python"
},
"language_info": {

View File

@@ -2,13 +2,13 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd
format_name: rmarkdown
format_version: '1.2'
-jupytext_version: 1.15.0
+jupytext_version: 1.19.1
---
# Deep Learning
@@ -1861,7 +1861,3 @@ nl_trainer.test(nl_module, datamodule=day_dm)
-```{python}
-```

View File

@@ -4565,7 +4565,7 @@
"datasets = []\n",
"for mask in [train, ~train]:\n",
" X_rnn_t = torch.tensor(X_rnn[mask].astype(np.float32))\n",
-" Y_t = torch.tensor(Y[mask].astype(np.float32))\n",
+" Y_t = torch.tensor(np.asarray(Y[mask], np.float32))\n",
" datasets.append(TensorDataset(X_rnn_t, Y_t))\n",
"nyse_train, nyse_test = datasets\n"
]
@@ -4955,8 +4955,7 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
-"main_language": "python"
+"formats": "ipynb"
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
@@ -4973,7 +4972,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.11"
+"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd

View File

@@ -2733,7 +2733,7 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
+"formats": "ipynb",
"main_language": "python"
},
"language_info": {

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd

View File

@@ -4622,7 +4622,7 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
+"formats": "ipynb",
"main_language": "python"
},
"language_info": {

View File

@@ -2,7 +2,7 @@
jupyter:
jupytext:
cell_metadata_filter: -all
-formats: ipynb,Rmd
+formats: Rmd
main_language: python
text_representation:
extension: .Rmd

View File

@@ -1584,7 +1584,7 @@
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all",
-"formats": "ipynb,Rmd",
+"formats": "ipynb",
"main_language": "python"
},
"language_info": {

View File

@@ -34,7 +34,7 @@ intent is that building a virtual environment with
To install the current version of the requirements run
```
-pip install -r https://raw.githubusercontent.com/intro-stat-learning/ISLP_labs/v2.2/requirements.txt;
+pip install -r https://raw.githubusercontent.com/intro-stat-learning/ISLP_labs/v2.2.1/requirements.txt;
```
The labs can now be run via:
@@ -46,7 +46,7 @@ jupyter lab Ch02-statlearn-lab.ipynb
# Zip / tarball
-You can download all the labs as a `.zip` or `.tar.gz` [here](https://github.com/intro-stat-learning/ISLP_labs/releases/tag/v2.2)
+You can download all the labs as a `.zip` or `.tar.gz` [here](https://github.com/intro-stat-learning/ISLP_labs/releases/tag/v2.2.1)
## Contributors ✨

View File

@@ -1,16 +1,16 @@
-numpy==1.26.4
-scipy==1.11.4
-pandas==2.2.2
-lxml==5.2.2
-scikit-learn==1.5.0
-joblib==1.4.2
-statsmodels==0.14.2
-lifelines==0.28.0
-pygam==0.9.1
-l0bnb==1.0.0
-torch==2.3.0
-torchvision==0.18.0
-pytorch-lightning==2.2.5
-torchinfo==1.8.0
-torchmetrics==1.4.0.post0
-ISLP==0.4.0
+numpy==2.4.2
+scipy==1.16.3
+pandas==3.0.0
+lxml==6.0.2
+scikit-learn==1.8.0
+joblib==1.5.3
+statsmodels==0.14.6
+lifelines==0.30.0
+pygam==0.12.0
+git+https://github.com/jonathan-taylor/l0bnb.git@fix_inf#egg=l0bnb
+torch==2.10.0
+torchvision==0.25.0
+pytorch-lightning==2.6.1
+torchinfo==1.8.0
+torchmetrics==1.8.2
+ISLP==0.4.1