Neural networks like Long Short-Term Memory (LSTM) recurrent neural networks are able to almost seamlessly model problems with multiple input variables. This is a great benefit in time series forecasting, where classical linear methods can be difficult to adapt to multivariate or multiple-input forecasting problems. In this tutorial, you will discover how to develop an LSTM model for multivariate time series forecasting with the Keras deep learning library.
After completing this tutorial, you will know how to transform a raw dataset into something you can use for time series forecasting, how to prepare data and fit an LSTM for a multivariate forecasting problem, and how to make a forecast and rescale the result back into the original units.
This tutorial is divided into 4 parts: the air pollution forecasting dataset, basic data preparation, the multivariate LSTM forecast model, and training the model on multiple lag timesteps.
In this tutorial, we will use the Air Quality dataset, which reports on the weather and the level of pollution each hour for five years at the US embassy in Beijing, China.
The data includes the date-time, the PM2.5 concentration, and weather information including dew point, temperature, pressure, wind direction, wind speed, and the cumulative number of hours of snow and rain. The complete list of features in the raw data is as follows:
No,year,month,day,hour,pm2.5,DEWP,TEMP,PRES,cbwd,Iws,Is,Ir
1,2010,1,1,0,NA,-21,-11,1021,NW,1.79,0,0
2,2010,1,1,1,NA,-21,-12,1020,NW,4.92,0,0
3,2010,1,1,2,NA,-21,-11,1019,NW,6.71,0,0
4,2010,1,1,3,NA,-21,-14,1019,NW,9.84,0,0
5,2010,1,1,4,NA,-20,-12,1018,NW,12.97,0,0
The first step is to consolidate the date-time information into a single date-time so that we can use it as an index in Pandas. A quick check reveals NA values for pm2.5 for the first 24 hours, so we will need to drop the first 24 hours of data. There are also a few scattered "NA" values later in the dataset; we can mark them with 0 values for now.
The script below loads the raw dataset and parses the date-time information into the Pandas DataFrame index. The "No" column is dropped and clearer names are specified for each column. Finally, the NA values are replaced with "0" values and the first 24 hours are removed.
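For reference, here is that preparation script (it is repeated, together with its output, in the listings at the end of this post). The raw file is read from data/pollution.csv and the cleaned version is written to data/pollution_clean.csv:
from pandas import read_csv
from datetime import datetime
# load data, combining year/month/day/hour into a single datetime index
def parse(x):
    return datetime.strptime(x, '%Y %m %d %H')
dataset = read_csv('data/pollution.csv', parse_dates=[['year', 'month', 'day', 'hour']], index_col=0, date_parser=parse)
dataset.drop('No', axis=1, inplace=True)
# manually specify column names
dataset.columns = ['pollution', 'dew', 'temp', 'press', 'wnd_dir', 'wnd_spd', 'snow', 'rain']
dataset.index.name = 'date'
# mark all NA values with 0
dataset['pollution'].fillna(0, inplace=True)
# drop the first 24 hours
dataset = dataset[24:]
# summarize first 5 rows
print(dataset.head(5))
# save to file
dataset.to_csv('data/pollution_clean.csv')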
Each series is then plotted as a separate subplot, as shown in the script below:
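This plotting script (also repeated in the listings at the end of the post) loads the cleaned file and plots each of the seven numeric series; column 4, the categorical wind direction, is skipped:
from pandas import read_csv
from matplotlib import pyplot
# load dataset
dataset = read_csv('data/pollution_clean.csv', header=0, index_col=0)
values = dataset.values
# specify columns to plot (column 4, wind direction, is categorical and skipped)
groups = [0, 1, 2, 3, 5, 6, 7]
i = 1
# plot each column in its own subplot
pyplot.figure()
for group in groups:
    pyplot.subplot(len(groups), 1, i)
    pyplot.plot(values[:, group])
    pyplot.title(dataset.columns[group], y=0.5, loc='right')
    i += 1
pyplot.show()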
The first step is to prepare the pollution dataset for the LSTM.
This involves framing the dataset as a supervised learning problem and normalizing the input variables. We will frame the supervised learning problem as predicting the pollution at the current hour (t) given the pollution measurement and weather conditions at the prior time step.
# prepare data for lstm
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
# convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
# load dataset
dataset = read_csv('data/pollution_clean.csv', header=0, index_col=0)
values = dataset.values
# integer encode direction
encoder = LabelEncoder()
values[:,4] = encoder.fit_transform(values[:,4])
# ensure all data is float
values = values.astype('float32')
# normalize features
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# frame as supervised learning
reframed = series_to_supervised(scaled, 1, 1)
# drop columns we don't want to predict
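# after framing, columns 0-7 are var1(t-1)..var8(t-1) and column 8 is var1(t) (pollution);
# columns 9-15 are var2(t)..var8(t) and are dropped so only var1(t) remains as the target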
reframed.drop(reframed.columns[[9,10,11,12,13,14,15]], axis=1, inplace=True)
print(reframed.head())
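To make the framing concrete, here is a small, hypothetical illustration of what series_to_supervised() produces; the toy array and its values are made up for illustration only and reuse the function defined above:
# toy illustration (hypothetical data), reusing series_to_supervised() from above
from numpy import array
toy = array([[1.0, 10.0],
             [2.0, 20.0],
             [3.0, 30.0],
             [4.0, 40.0]])
print(series_to_supervised(toy, 1, 1))
# columns: var1(t-1), var2(t-1), var1(t), var2(t); the first row is dropped
# because its lagged (t-1) values are NaN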
The example below splits the dataset into train and test sets, then splits the train and test sets into input and output variables. Finally, the inputs (X) are reshaped into the 3D format expected by LSTMs, namely [samples, timesteps, features].
...
# split into train and test sets
values = reframed.values
n_train_hours = 365 * 24
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
# split into input and outputs
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
We define the LSTM with 50 neurons in the first hidden layer and 1 neuron in the output layer for predicting pollution. The input shape will be 1 time step with 8 features.
We will use the Mean Absolute Error (MAE) loss function and the efficient Adam version of stochastic gradient descent.
The model will be fit for 50 training epochs with a batch size of 72. Remember that the internal state of the LSTM in Keras is reset at the end of each batch, so an internal state that is a function of a number of days may be helpful (try testing this).
Finally, we keep track of both the training and test loss during training by setting the validation_data argument in the fit() function. At the end of the run, both the training and test loss are plotted.
...
# design network
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# fit network
history = model.fit(train_X, train_y, epochs=50, batch_size=72, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# plot history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
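After the model is fit, we can forecast for the entire test dataset. The forecast is combined with the remaining test-set features and the scaling is inverted (the scaler was fit on all 8 columns); the same is done for the actual pollution values so that the error is calculated in the original units of the variable, and the RMSE is reported. The complete example is listed below.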
from math import sqrt
from numpy import concatenate
from matplotlib import pyplot
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
# load dataset
dataset = read_csv('data/pollution_clean.csv', header=0, index_col=0)
values = dataset.values
# integer encode direction
encoder = LabelEncoder()
values[:,4] = encoder.fit_transform(values[:,4])
# ensure all data is float
values = values.astype('float32')
# normalize features
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# frame as supervised learning
reframed = series_to_supervised(scaled, 1, 1)
# drop columns we don't want to predict
reframed.drop(reframed.columns[[9,10,11,12,13,14,15]], axis=1, inplace=True)
print(reframed.head())
# split into train and test sets
values = reframed.values
n_train_hours = 365 * 24
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
# split into input and outputs
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
# design network
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# fit network
history = model.fit(train_X, train_y, epochs=50, batch_size=72, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# plot history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
# make a prediction
yhat = model.predict(test_X)
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
# invert scaling for forecast
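# the scaler was fit on all 8 columns, so the 1-column forecast is padded with the
# remaining 7 test features before inverse_transform, and only the first (pollution)
# column of the result is kept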
inv_yhat = concatenate((yhat, test_X[:, 1:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:,0]
# invert scaling for actual
test_y = test_y.reshape((len(test_y), 1))
inv_y = concatenate((test_y, test_X[:, 1:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:,0]
# calculate RMSE
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
print('Test RMSE: %.3f' % rmse)
Source code
from pandas import read_csv
from datetime import datetime
# load data
def parse(x):
    return datetime.strptime(x, '%Y %m %d %H')
dataset = read_csv('data/pollution.csv', parse_dates = [['year', 'month', 'day', 'hour']], index_col=0, date_parser=parse)
dataset.drop('No', axis=1, inplace=True)
# manually specify column names
dataset.columns = ['pollution', 'dew', 'temp', 'press', 'wnd_dir', 'wnd_spd', 'snow', 'rain']
dataset.index.name = 'date'
# mark all NA values with 0
dataset['pollution'].fillna(0, inplace=True)
# drop the first 24 hours
dataset = dataset[24:]
# summarize first 5 rows
print(dataset.head(5))
# save to file
dataset.to_csv('data/pollution_clean.csv')
                     pollution  dew  temp   press wnd_dir  wnd_spd  snow  rain
date
2010-01-02 00:00:00      129.0  -16  -4.0  1020.0      SE     1.79     0     0
2010-01-02 01:00:00      148.0  -15  -4.0  1020.0      SE     2.68     0     0
2010-01-02 02:00:00      159.0  -11  -5.0  1021.0      SE     3.57     0     0
2010-01-02 03:00:00      181.0   -7  -5.0  1022.0      SE     5.36     1     0
2010-01-02 04:00:00      138.0   -7  -5.0  1022.0      SE     6.25     2     0
from pandas import read_csv
from matplotlib import pyplot
# load dataset
dataset = read_csv('data/pollution_clean.csv', header=0, index_col=0)
values = dataset.values
# specify columns to plot
groups = [0, 1, 2, 3, 5, 6, 7]
i = 1
# plot each column
pyplot.figure()
for group in groups:
    pyplot.subplot(len(groups), 1, i)
    pyplot.plot(values[:, group])
    pyplot.title(dataset.columns[group], y=0.5, loc='right')
    i += 1
pyplot.show()
# prepare data for lstm
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
# convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
# load dataset
dataset = read_csv('data/pollution_clean.csv', header=0, index_col=0)
values = dataset.values
# integer encode direction
encoder = LabelEncoder()
values[:,4] = encoder.fit_transform(values[:,4])
# ensure all data is float
values = values.astype('float32')
# normalize features
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# frame as supervised learning
reframed = series_to_supervised(scaled, 1, 1)
# drop columns we don't want to predict
reframed.drop(reframed.columns[[9,10,11,12,13,14,15]], axis=1, inplace=True)
print(reframed.head())
   var1(t-1)  var2(t-1)  var3(t-1)  var4(t-1)  var5(t-1)  var6(t-1)  \
1   0.129779   0.352941   0.245902   0.527273   0.666667   0.002290
2   0.148893   0.367647   0.245902   0.527273   0.666667   0.003811
3   0.159960   0.426471   0.229508   0.545454   0.666667   0.005332
4   0.182093   0.485294   0.229508   0.563637   0.666667   0.008391
5   0.138833   0.485294   0.229508   0.563637   0.666667   0.009912

   var7(t-1)  var8(t-1)   var1(t)
1   0.000000        0.0  0.148893
2   0.000000        0.0  0.159960
3   0.000000        0.0  0.182093
4   0.037037        0.0  0.138833
5   0.074074        0.0  0.109658
# split into train and test sets
values = reframed.values
n_train_hours = 365 * 24
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
# split into input and outputs
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
(8760, 1, 8) (8760,) (35039, 1, 8) (35039,)
from matplotlib import pyplot
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# design network
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# fit network
history = model.fit(train_X, train_y, epochs=50, batch_size=72, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# plot history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
Epoch 1/50 122/122 - 3s - loss: 0.0559 - val_loss: 0.0556 - 3s/epoch - 21ms/step Epoch 2/50 122/122 - 1s - loss: 0.0383 - val_loss: 0.0605 - 745ms/epoch - 6ms/step Epoch 3/50 122/122 - 1s - loss: 0.0216 - val_loss: 0.0572 - 735ms/epoch - 6ms/step Epoch 4/50 122/122 - 1s - loss: 0.0169 - val_loss: 0.0448 - 652ms/epoch - 5ms/step Epoch 5/50 122/122 - 1s - loss: 0.0158 - val_loss: 0.0312 - 694ms/epoch - 6ms/step Epoch 6/50 122/122 - 1s - loss: 0.0152 - val_loss: 0.0259 - 702ms/epoch - 6ms/step Epoch 7/50 122/122 - 1s - loss: 0.0152 - val_loss: 0.0240 - 777ms/epoch - 6ms/step Epoch 8/50 122/122 - 1s - loss: 0.0149 - val_loss: 0.0211 - 789ms/epoch - 6ms/step Epoch 9/50 122/122 - 1s - loss: 0.0149 - val_loss: 0.0194 - 777ms/epoch - 6ms/step Epoch 10/50 122/122 - 1s - loss: 0.0149 - val_loss: 0.0172 - 666ms/epoch - 5ms/step Epoch 11/50 122/122 - 1s - loss: 0.0148 - val_loss: 0.0161 - 762ms/epoch - 6ms/step Epoch 12/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0150 - 756ms/epoch - 6ms/step Epoch 13/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0147 - 800ms/epoch - 7ms/step Epoch 14/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0149 - 751ms/epoch - 6ms/step Epoch 15/50 122/122 - 1s - loss: 0.0147 - val_loss: 0.0143 - 732ms/epoch - 6ms/step Epoch 16/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0144 - 785ms/epoch - 6ms/step Epoch 17/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0142 - 761ms/epoch - 6ms/step Epoch 18/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0141 - 700ms/epoch - 6ms/step Epoch 19/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0143 - 670ms/epoch - 5ms/step Epoch 20/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0140 - 682ms/epoch - 6ms/step Epoch 21/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0139 - 685ms/epoch - 6ms/step Epoch 22/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0137 - 708ms/epoch - 6ms/step Epoch 23/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0136 - 703ms/epoch - 6ms/step Epoch 24/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0141 - 716ms/epoch - 6ms/step Epoch 25/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0140 - 759ms/epoch - 6ms/step Epoch 26/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0139 - 659ms/epoch - 5ms/step Epoch 27/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0138 - 676ms/epoch - 6ms/step Epoch 28/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0138 - 700ms/epoch - 6ms/step Epoch 29/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0135 - 729ms/epoch - 6ms/step Epoch 30/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0135 - 647ms/epoch - 5ms/step Epoch 31/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0141 - 680ms/epoch - 6ms/step Epoch 32/50 122/122 - 1s - loss: 0.0147 - val_loss: 0.0137 - 675ms/epoch - 6ms/step Epoch 33/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0136 - 693ms/epoch - 6ms/step Epoch 34/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0137 - 721ms/epoch - 6ms/step Epoch 35/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0144 - 701ms/epoch - 6ms/step Epoch 36/50 122/122 - 1s - loss: 0.0147 - val_loss: 0.0139 - 697ms/epoch - 6ms/step Epoch 37/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0137 - 672ms/epoch - 6ms/step Epoch 38/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0135 - 703ms/epoch - 6ms/step Epoch 39/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0134 - 679ms/epoch - 6ms/step Epoch 40/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0137 - 720ms/epoch - 6ms/step Epoch 41/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0141 - 692ms/epoch - 6ms/step Epoch 42/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0136 - 733ms/epoch - 6ms/step Epoch 43/50 122/122 - 1s - loss: 
0.0144 - val_loss: 0.0136 - 705ms/epoch - 6ms/step Epoch 44/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0134 - 692ms/epoch - 6ms/step Epoch 45/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0134 - 732ms/epoch - 6ms/step Epoch 46/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0133 - 651ms/epoch - 5ms/step Epoch 47/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0133 - 677ms/epoch - 6ms/step Epoch 48/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0133 - 677ms/epoch - 6ms/step Epoch 49/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0134 - 666ms/epoch - 5ms/step Epoch 50/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0133 - 666ms/epoch - 5ms/step
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
from math import sqrt
from numpy import concatenate
from matplotlib import pyplot
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
# load dataset
dataset = read_csv('data/pollution_clean.csv', header=0, index_col=0)
values = dataset.values
# integer encode direction
encoder = LabelEncoder()
values[:,4] = encoder.fit_transform(values[:,4])
# ensure all data is float
values = values.astype('float32')
# normalize features
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# frame as supervised learning
reframed = series_to_supervised(scaled, 1, 1)
# drop columns we don't want to predict
reframed.drop(reframed.columns[[9,10,11,12,13,14,15]], axis=1, inplace=True)
print(reframed.head())
# split into train and test sets
values = reframed.values
n_train_hours = 365 * 24
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
# split into input and outputs
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
# design network
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# fit network
history = model.fit(train_X, train_y, epochs=50, batch_size=72, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# plot history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
# make a prediction
yhat = model.predict(test_X)
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
# invert scaling for forecast
inv_yhat = concatenate((yhat, test_X[:, 1:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:,0]
# invert scaling for actual
test_y = test_y.reshape((len(test_y), 1))
inv_y = concatenate((test_y, test_X[:, 1:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:,0]
# calculate RMSE
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
print('Test RMSE: %.3f' % rmse)
var1(t-1) var2(t-1) var3(t-1) var4(t-1) var5(t-1) var6(t-1) \ 1 0.129779 0.352941 0.245902 0.527273 0.666667 0.002290 2 0.148893 0.367647 0.245902 0.527273 0.666667 0.003811 3 0.159960 0.426471 0.229508 0.545454 0.666667 0.005332 4 0.182093 0.485294 0.229508 0.563637 0.666667 0.008391 5 0.138833 0.485294 0.229508 0.563637 0.666667 0.009912 var7(t-1) var8(t-1) var1(t) 1 0.000000 0.0 0.148893 2 0.000000 0.0 0.159960 3 0.000000 0.0 0.182093 4 0.037037 0.0 0.138833 5 0.074074 0.0 0.109658 (8760, 1, 8) (8760,) (35039, 1, 8) (35039,) Epoch 1/50 122/122 - 3s - loss: 0.0577 - val_loss: 0.0531 - 3s/epoch - 22ms/step Epoch 2/50 122/122 - 1s - loss: 0.0410 - val_loss: 0.0570 - 807ms/epoch - 7ms/step Epoch 3/50 122/122 - 1s - loss: 0.0245 - val_loss: 0.0484 - 720ms/epoch - 6ms/step Epoch 4/50 122/122 - 1s - loss: 0.0172 - val_loss: 0.0396 - 655ms/epoch - 5ms/step Epoch 5/50 122/122 - 1s - loss: 0.0157 - val_loss: 0.0258 - 644ms/epoch - 5ms/step Epoch 6/50 122/122 - 1s - loss: 0.0151 - val_loss: 0.0196 - 690ms/epoch - 6ms/step Epoch 7/50 122/122 - 1s - loss: 0.0148 - val_loss: 0.0179 - 690ms/epoch - 6ms/step Epoch 8/50 122/122 - 1s - loss: 0.0147 - val_loss: 0.0169 - 682ms/epoch - 6ms/step Epoch 9/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0166 - 673ms/epoch - 6ms/step Epoch 10/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0161 - 657ms/epoch - 5ms/step Epoch 11/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0156 - 688ms/epoch - 6ms/step Epoch 12/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0152 - 686ms/epoch - 6ms/step Epoch 13/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0152 - 666ms/epoch - 5ms/step Epoch 14/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0148 - 691ms/epoch - 6ms/step Epoch 15/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0148 - 706ms/epoch - 6ms/step Epoch 16/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0146 - 668ms/epoch - 5ms/step Epoch 17/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0147 - 691ms/epoch - 6ms/step Epoch 18/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0145 - 674ms/epoch - 6ms/step Epoch 19/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0149 - 692ms/epoch - 6ms/step Epoch 20/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0148 - 647ms/epoch - 5ms/step Epoch 21/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0146 - 686ms/epoch - 6ms/step Epoch 22/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0147 - 794ms/epoch - 7ms/step Epoch 23/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0145 - 718ms/epoch - 6ms/step Epoch 24/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0145 - 696ms/epoch - 6ms/step Epoch 25/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0144 - 752ms/epoch - 6ms/step Epoch 26/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0145 - 756ms/epoch - 6ms/step Epoch 27/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0144 - 709ms/epoch - 6ms/step Epoch 28/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0144 - 734ms/epoch - 6ms/step Epoch 29/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0142 - 665ms/epoch - 5ms/step Epoch 30/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0143 - 661ms/epoch - 5ms/step Epoch 31/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0141 - 669ms/epoch - 5ms/step Epoch 32/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0143 - 689ms/epoch - 6ms/step Epoch 33/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0143 - 685ms/epoch - 6ms/step Epoch 34/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0141 - 727ms/epoch - 6ms/step Epoch 35/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0141 - 677ms/epoch - 6ms/step Epoch 36/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0142 - 709ms/epoch - 6ms/step Epoch 
37/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0141 - 679ms/epoch - 6ms/step Epoch 38/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0142 - 687ms/epoch - 6ms/step Epoch 39/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0142 - 677ms/epoch - 6ms/step Epoch 40/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0141 - 670ms/epoch - 5ms/step Epoch 41/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0139 - 666ms/epoch - 5ms/step Epoch 42/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0139 - 678ms/epoch - 6ms/step Epoch 43/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0147 - 670ms/epoch - 5ms/step Epoch 44/50 122/122 - 1s - loss: 0.0148 - val_loss: 0.0143 - 682ms/epoch - 6ms/step Epoch 45/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0139 - 695ms/epoch - 6ms/step Epoch 46/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0139 - 682ms/epoch - 6ms/step Epoch 47/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0137 - 669ms/epoch - 5ms/step Epoch 48/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0137 - 729ms/epoch - 6ms/step Epoch 49/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0137 - 727ms/epoch - 6ms/step Epoch 50/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0138 - 663ms/epoch - 5ms/step
1095/1095 [==============================] - 2s 1ms/step
Test RMSE: 26.714
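The listing below is an alternative version of the complete example that trains the model on multiple lag timesteps. Here n_hours is set to 3, so each input sample contains the previous 3 hours of all 8 features, the input is reshaped to [samples, 3, 8], and the target is still the pollution at the current hour (column -n_features of the reframed data).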
from math import sqrt
from numpy import concatenate
from matplotlib import pyplot
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
# load dataset
dataset = read_csv('data/pollution_clean.csv', header=0, index_col=0)
values = dataset.values
# integer encode direction
encoder = LabelEncoder()
values[:,4] = encoder.fit_transform(values[:,4])
# ensure all data is float
values = values.astype('float32')
# normalize features
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# specify the number of lag hours
n_hours = 3
n_features = 8
# frame as supervised learning
reframed = series_to_supervised(scaled, n_hours, 1)
print(reframed.shape)
# split into train and test sets
values = reframed.values
n_train_hours = 365 * 24
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
# split into input and outputs
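# the first n_hours*n_features columns are the lag inputs; column -n_features is
# var1(t), i.e. the pollution at the hour being predicted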
n_obs = n_hours * n_features
train_X, train_y = train[:, :n_obs], train[:, -n_features]
test_X, test_y = test[:, :n_obs], test[:, -n_features]
print(train_X.shape, len(train_X), train_y.shape)
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], n_hours, n_features))
test_X = test_X.reshape((test_X.shape[0], n_hours, n_features))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
# design network
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# fit network
history = model.fit(train_X, train_y, epochs=50, batch_size=72, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# plot history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
# make a prediction
yhat = model.predict(test_X)
test_X = test_X.reshape((test_X.shape[0], n_hours*n_features))
# invert scaling for forecast
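# test_X now has n_hours*n_features columns; its last 7 columns are var2(t-1)..var8(t-1),
# which pad the forecast back to 8 columns so the scaler can invert it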
inv_yhat = concatenate((yhat, test_X[:, -7:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:,0]
# invert scaling for actual
test_y = test_y.reshape((len(test_y), 1))
inv_y = concatenate((test_y, test_X[:, -7:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:,0]
# calculate RMSE
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
print('Test RMSE: %.3f' % rmse)
(43797, 32) (8760, 24) 8760 (8760,) (8760, 3, 8) (8760,) (35037, 3, 8) (35037,) Epoch 1/50 122/122 - 3s - loss: 0.0515 - val_loss: 0.0395 - 3s/epoch - 24ms/step Epoch 2/50 122/122 - 1s - loss: 0.0253 - val_loss: 0.0266 - 999ms/epoch - 8ms/step Epoch 3/50 122/122 - 1s - loss: 0.0209 - val_loss: 0.0198 - 965ms/epoch - 8ms/step Epoch 4/50 122/122 - 1s - loss: 0.0206 - val_loss: 0.0188 - 978ms/epoch - 8ms/step Epoch 5/50 122/122 - 1s - loss: 0.0207 - val_loss: 0.0186 - 1s/epoch - 9ms/step Epoch 6/50 122/122 - 1s - loss: 0.0201 - val_loss: 0.0185 - 1s/epoch - 8ms/step Epoch 7/50 122/122 - 1s - loss: 0.0191 - val_loss: 0.0179 - 1s/epoch - 9ms/step Epoch 8/50 122/122 - 1s - loss: 0.0188 - val_loss: 0.0179 - 1s/epoch - 9ms/step Epoch 9/50 122/122 - 1s - loss: 0.0186 - val_loss: 0.0174 - 1s/epoch - 9ms/step Epoch 10/50 122/122 - 1s - loss: 0.0182 - val_loss: 0.0169 - 964ms/epoch - 8ms/step Epoch 11/50 122/122 - 1s - loss: 0.0175 - val_loss: 0.0165 - 969ms/epoch - 8ms/step Epoch 12/50 122/122 - 1s - loss: 0.0172 - val_loss: 0.0165 - 1s/epoch - 9ms/step Epoch 13/50 122/122 - 1s - loss: 0.0171 - val_loss: 0.0164 - 991ms/epoch - 8ms/step Epoch 14/50 122/122 - 1s - loss: 0.0164 - val_loss: 0.0157 - 972ms/epoch - 8ms/step Epoch 15/50 122/122 - 1s - loss: 0.0165 - val_loss: 0.0160 - 968ms/epoch - 8ms/step Epoch 16/50 122/122 - 1s - loss: 0.0158 - val_loss: 0.0156 - 892ms/epoch - 7ms/step Epoch 17/50 122/122 - 1s - loss: 0.0158 - val_loss: 0.0162 - 954ms/epoch - 8ms/step Epoch 18/50 122/122 - 1s - loss: 0.0153 - val_loss: 0.0161 - 890ms/epoch - 7ms/step Epoch 19/50 122/122 - 1s - loss: 0.0153 - val_loss: 0.0169 - 994ms/epoch - 8ms/step Epoch 20/50 122/122 - 1s - loss: 0.0149 - val_loss: 0.0167 - 944ms/epoch - 8ms/step Epoch 21/50 122/122 - 1s - loss: 0.0149 - val_loss: 0.0170 - 852ms/epoch - 7ms/step Epoch 22/50 122/122 - 1s - loss: 0.0148 - val_loss: 0.0166 - 914ms/epoch - 7ms/step Epoch 23/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0168 - 824ms/epoch - 7ms/step Epoch 24/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0164 - 824ms/epoch - 7ms/step Epoch 25/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0163 - 918ms/epoch - 8ms/step Epoch 26/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0164 - 871ms/epoch - 7ms/step Epoch 27/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0160 - 948ms/epoch - 8ms/step Epoch 28/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0155 - 870ms/epoch - 7ms/step Epoch 29/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0152 - 836ms/epoch - 7ms/step Epoch 30/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0149 - 821ms/epoch - 7ms/step Epoch 31/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0150 - 828ms/epoch - 7ms/step Epoch 32/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0150 - 814ms/epoch - 7ms/step Epoch 33/50 122/122 - 1s - loss: 0.0145 - val_loss: 0.0152 - 830ms/epoch - 7ms/step Epoch 34/50 122/122 - 1s - loss: 0.0146 - val_loss: 0.0150 - 824ms/epoch - 7ms/step Epoch 35/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0148 - 838ms/epoch - 7ms/step Epoch 36/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0145 - 833ms/epoch - 7ms/step Epoch 37/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0143 - 831ms/epoch - 7ms/step Epoch 38/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0144 - 801ms/epoch - 7ms/step Epoch 39/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0145 - 779ms/epoch - 6ms/step Epoch 40/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0149 - 816ms/epoch - 7ms/step Epoch 41/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0145 - 793ms/epoch - 7ms/step Epoch 42/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0144 
- 802ms/epoch - 7ms/step Epoch 43/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0143 - 797ms/epoch - 7ms/step Epoch 44/50 122/122 - 1s - loss: 0.0142 - val_loss: 0.0143 - 793ms/epoch - 6ms/step Epoch 45/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0141 - 794ms/epoch - 7ms/step Epoch 46/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0139 - 818ms/epoch - 7ms/step Epoch 47/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0140 - 811ms/epoch - 7ms/step Epoch 48/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0139 - 785ms/epoch - 6ms/step Epoch 49/50 122/122 - 1s - loss: 0.0144 - val_loss: 0.0140 - 810ms/epoch - 7ms/step Epoch 50/50 122/122 - 1s - loss: 0.0143 - val_loss: 0.0139 - 796ms/epoch - 7ms/step
1095/1095 [==============================] - 2s 1ms/step
Test RMSE: 26.571