
Time Series Prediction in Python: Long Short-Term Memory (LSTM) Forecasting of Extreme Learning Machine (ELM) Residuals, with the Results Summed

This experiment runs in Anaconda3 Jupyter and calls the sklearn and keras packages; please have them installed in advance.
1. Import the common packages
Mainly the csv, numpy, metrics, and pandas packages, among others.

import csv
import numpy as np
import time
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import explained_variance_score
from sklearn import metrics
from sklearn.svm import SVR
import matplotlib.pyplot as plt
from pandas import DataFrame
from pandas import Series
from pandas import concat
from pandas import read_csv
from pandas import datetime  # unused here; removed in newer pandas versions
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from math import sqrt
from matplotlib import pyplot
import numpy

2. Load the data
Here data is the full dataset, traffic_feature the feature set, and traffic_target the target set.

data = []
traffic_feature = []
traffic_target = []
csv_file = csv.reader(open('GoodData.csv'))
for content in csv_file:
    content = list(map(float, content))
    if len(content) != 0:
        data.append(content)
        traffic_feature.append(content[0:4])
        traffic_target.append(content[-1])
traffic_feature = np.array(traffic_feature)
traffic_target = np.array(traffic_target)
data = np.array(data)

Split the data; this article uses the 70% point as the split.
feature_train = traffic_feature[0:int(len(traffic_feature)*0.7)]
feature_test = traffic_feature[int(len(traffic_feature)*0.7):int(len(traffic_feature))]
target_train = traffic_target[0:int(len(traffic_target)*0.7)]
target_test = traffic_target[int(len(traffic_target)*0.7):int(len(traffic_target))]

Split the last 30% of the target values once more, again at the 70% point, and reserve the tail for comparison.

target_test_last = target_test[int(len(target_test)*0.7):int(len(target_test))]

3. Standardize the data
Use StandardScaler() to standardize the feature data. Note that transform returns a new array, so slices taken beforehand (such as feature_train above) still reference the unscaled values; in practice it is safer to standardize before splitting.

scaler = StandardScaler()  # standardization transformer
scaler.fit(traffic_feature)  # fit the scaler on the features
traffic_feature = scaler.transform(traffic_feature)  # transform the dataset

4. Predict with the ELM algorithm
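Before the full class below, the core ELM recipe (random, untrained hidden weights; sigmoid hidden layer; closed-form least-squares output weights) can be sketched end to end on synthetic data. All names and values here are mine for illustration, and the plain pseudo-inverse is used instead of the article's regularized variant:

```python
import numpy as np

rng = np.random.RandomState(0)

# Toy regression data standing in for the traffic features/targets.
X = rng.uniform(-1, 1, (200, 4))
y = X.sum(axis=1) + 0.01 * rng.randn(200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

num = 8                                        # hidden neurons, as in the article
w = rng.uniform(-1, 1, (4, num))               # random input weights, never trained
b = rng.uniform(-0.4, 0.4, (1, num))           # random hidden biases
H = sigmoid(X @ w + b)                         # hidden-layer output matrix H
beta = np.linalg.pinv(H) @ y.reshape(-1, 1)    # closed-form output weights

pred = (H @ beta).ravel()
print(np.corrcoef(pred, y)[0, 1])  # high correlation: the fit is close
```

Only beta is ever "trained", which is why ELM fitting reduces to one pseudo-inverse.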
from sklearn.preprocessing import OneHotEncoder  # needed by classifisor_train

class HiddenLayer:
    def __init__(self, x, num):  # x: input matrix; num: number of hidden neurons
        row = x.shape[0]
        columns = x.shape[1]
        rnd = np.random.RandomState(9999)
        self.w = rnd.uniform(-1, 1, (columns, num))  # input-to-hidden connection weights
        self.b = np.zeros([row, num], dtype=float)   # hidden-neuron thresholds, i.e. the b_i values
        for i in range(num):
            rand_b = rnd.uniform(-0.4, 0.4)  # random value between -0.4 and 0.4
            for j in range(row):
                self.b[j, i] = rand_b
        self.h = self.sigmoid(np.dot(x, self.w) + self.b)  # hidden-layer output matrix H
        self.H_ = np.linalg.pinv(self.h)  # pseudo-inverse of H

    # Activation function g(x); it must be infinitely differentiable.
    def sigmoid(self, x):
        return 1.0 / (1 + np.exp(-x))

    # For regression problems: compute the hidden-to-output weights, i.e. beta.
    def regressor_train(self, T):
        C = 2
        I = len(T)
        sub_former = np.dot(np.transpose(self.h), self.h) + I / C
        all_m = np.dot(np.linalg.pinv(sub_former), np.transpose(self.h))
        T = T.reshape(-1, 1)
        self.beta = np.dot(all_m, T)
        return self.beta

    # For classification problems: compute beta (targets are one-hot encoded first).
    def classifisor_train(self, T):
        en_one = OneHotEncoder()
        T = en_one.fit_transform(T.reshape(-1, 1)).toarray()  # toarray() turns the sparse matrix into a dense array
        C = 3
        I = len(T)
        sub_former = np.dot(np.transpose(self.h), self.h) + I / C
        all_m = np.dot(np.linalg.pinv(sub_former), np.transpose(self.h))
        self.beta = np.dot(all_m, T)
        return self.beta

    def regressor_test(self, test_x):
        b_row = test_x.shape[0]
        h = self.sigmoid(np.dot(test_x, self.w) + self.b[:b_row, :])
        result = np.dot(h, self.beta)
        return result

    def classifisor_test(self, test_x):
        b_row = test_x.shape[0]
        h = self.sigmoid(np.dot(test_x, self.w) + self.b[:b_row, :])
        result = np.dot(h, self.beta)
        result = [item.tolist().index(max(item.tolist())) for item in result]
        return result
Run it!
The number of hidden neurons is set to 8; you can tune this yourself.
The results are as follows:
EVS = 0.8705
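For reference, sklearn's explained_variance_score (EVS) is 1 − Var(y_true − y_pred) / Var(y_true); a quick numpy check of that formula on toy values:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# EVS = 1 - Var(residuals) / Var(true values), using population variance (ddof=0)
evs = 1 - np.var(y_true - y_pred) / np.var(y_true)
print(round(evs, 4))  # 0.9572
```

An EVS of 1.0 means the predictions explain all the variance; the 0.8705 above is therefore a reasonable but improvable fit, which motivates the residual-correction step.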
import matplotlib.pyplot as plt
from sklearn.metrics import explained_variance_score

a = HiddenLayer(feature_train, 8)
a.regressor_train(target_train)
result = a.regressor_test(feature_test)
plt.plot(result)       # predicted values
plt.plot(target_test)  # true values
plt.legend(['ELM', 'TRUE'])
fig = plt.gcf()
fig.set_size_inches(18.5, 10.5)
plt.title("ELM")
plt.show()
print("EVS:", explained_variance_score(target_test, result))

5. Subtract the predicted values from the true values to obtain the ELM residuals
The residuals vary as follows:

a = []  # true values
for i in target_test:
    a.append(i)
b = []  # predicted values
for i in result:
    b.append(i[0])
c = []  # residuals
num = []
for inx, i in enumerate(a):
    c.append(b[inx] - i)  # note: computed as prediction minus truth
    num.append(inx)
plt.plot(c)  # residual series
fig = plt.gcf()
fig.set_size_inches(18.5, 5)
plt.xlim(0, 1560)
plt.title("Residual Signal")
plt.show()

Truncate the last 30% of the predictions and reserve it for comparison.

result_last = b[int(len(b)*0.7):int(len(b))]

6. Predict the residuals with a long short-term memory (LSTM) network
Use the first 70% of the residuals as the training set and the last 30% as the validation set.
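The article stops before showing the LSTM code itself. Before a Keras LSTM can be fit on the residual series c, the series has to be reframed as supervised samples (a lag window predicting the next value). A minimal numpy sketch of that framing, with a window length of 3 chosen by me for illustration (not taken from the article):

```python
import numpy as np

def make_windows(series, window=3):
    """Turn a 1-D series into (samples, window) inputs and next-step targets."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

residuals = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2]  # stand-in for the ELM residual list c
X, y = make_windows(residuals, window=3)
print(X.shape, y.shape)  # (3, 3) (3,)

# Keras LSTM layers expect 3-D input: (samples, timesteps, features).
X_lstm = X.reshape((X.shape[0], X.shape[1], 1))
```

After the LSTM predicts the held-out residuals, adding those predictions back onto the ELM output (the "summing" in the title) yields the corrected forecast.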

Published: 2024-09-21 10:37:37

Link: https://www.17tex.com/tex/2/360890.html
