Notes on the Example Code from Nanjing University's "Exploring the Mysteries of Data" Course Slides, Part 03


Chp4-4
December 17, 2019

In [7]: #loadtxt
import numpy as np
x=np.loadtxt(r'C:\Python\Scripts\my_data\global-earthquakes.csv',
             delimiter=',')  # note the keyword-argument syntax; the raw string (r'...') keeps the backslashes in the path literal
print(type(x))
print(x.shape)
print('\n')
print(x[:2,:3])  # 2-D slicing of an ndarray
x_int=np.loadtxt('global-earthquakes.csv',delimiter=',',dtype=int)
print('\n')
print(x_int[:2,:3])
<class 'numpy.ndarray'>
(59209, 8)
[[1.973e+03 1.000e+00 1.000e+00]
[1.973e+03 1.000e+00 1.000e+00]]
[[1973 1 1]
[1973 1 1]]

In [1]: #loadtxt
import numpy as np
x=np.loadtxt(r'C:\Python\Scripts\my_data\iris.csv',
             delimiter=',')  # note: iris contains strings as well as numbers
print(type(x))
print(x.shape)
print('\n')
print(x[:2,:3])  # 2-D slicing of an ndarray
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-1-85eb4be19fe4> in <module>()
1 #loadtxt
2 import numpy as np
----> 3 x=np.loadtxt(r'C:\Python\Scripts\my_data\iris.csv',
                     delimiter=',')  # note: iris contains strings as well as numbers
4 print(type(x))
5 print(x.shape)
c:\python\lib\site-packages\numpy\lib\npyio.py in loadtxt(fname, dtype, comments,
delimiter, converters, skiprows, usecols, unpack, ndmin, encoding)
1099 # converting the data
1100 X = None
-> 1101 for x in read_data(_loadtxt_chunksize):
1102 if X is None:
1103 X = np.array(x, dtype)
c:\python\lib\site-packages\numpy\lib\npyio.py in read_data(chunk_size)
1026
1027 # Convert each value according to its column and store
-> 1028 items = [conv(val) for (conv, val) in zip(converters, vals)]
1029
1030 # Then pack it according to the dtype's nesting
c:\python\lib\site-packages\numpy\lib\npyio.py in <listcomp>(.0)
1026
1027 # Convert each value according to its column and store
-> 1028 items = [conv(val) for (conv, val) in zip(converters, vals)]
1029
1030 # Then pack it according to the dtype's nesting
c:\python\lib\site-packages\numpy\lib\npyio.py in floatconv(x)
744 if '0x' in x:
745 return float.fromhex(x)
--> 746 return float(x)
747
748 typ = dtype.type
ValueError: could not convert string to float: 'setosa'
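
The error goes away if loadtxt is told to skip the string column. A minimal sketch (my addition, not from the slides), assuming the same iris.csv layout of four numeric columns followed by the species name:

import numpy as np
# Read only the numeric columns 0-3; column 4 (the species string) is
# never passed to the float converter, so the ValueError cannot occur.
x = np.loadtxt(r'C:\Python\Scripts\my_data\iris.csv',
               delimiter=',', usecols=(0, 1, 2, 3))
print(x.shape)   # (150, 4)
print(x[:2, :])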

In [2]: #loadtxt
import numpy as np
x=np.loadtxt(r'C:\Python\Scripts\my_data\iris.csv',delimiter=',',
             dtype=str)  # force every field to be imported as a string
print(type(x))
print(x.shape)
print('\n')
print(x[:2,:3])  # 2-D slicing of an ndarray
<class 'numpy.ndarray'>
(150, 5)
[['5.1' '3.5' '1.4']
['4.9' '3.0' '1.4']]
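
Once everything has been read in as strings, the numeric part can still be recovered in one step; a small sketch (my addition):

# Slice off the four numeric columns and convert; astype(float)
# parses each string element of the ndarray.
x_num = x[:, :4].astype(float)
print(x_num[:2, :])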

In [3]: #read_csv
import pandas as pd
data=pd.read_csv(r'C:\Python\Scripts\my_data\iris.csv',
                 header=None,
                 names=['sepal_len','sepal_wid','petal_len',
                        'petal_wid','target'])
print(type(data))
print('\n')
print(data.head())
print('\n')
<class 'pandas.core.frame.DataFrame'>
sepal_len sepal_wid petal_len petal_wid target
0 5.1 3.5 1.4 0.2 setosa
1 4.9 3.0 1.4 0.2 setosa
2 4.7 3.2 1.3 0.2 setosa
3 4.6 3.1 1.5 0.2 setosa
4 5.0 3.6 1.4 0.2 setosa

In [4]: print(data.values[:2,:5])  # 2-D indexing works the same as for any regular array
print(type(data.values))
print('\n')
x=data.values[:,:4]
print(x[:2,:])
print(type(x))
print('\n')
[[5.1 3.5 1.4 0.2 'setosa']
[4.9 3.0 1.4 0.2 'setosa']]
<class 'numpy.ndarray'>
[[5.1 3.5 1.4 0.2]
[4.9 3.0 1.4 0.2]]
<class 'numpy.ndarray'>
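
A side note (mine, not from the slides): recent pandas documentation prefers DataFrame.to_numpy() over the .values attribute, and columns are usually selected by name before converting:

# Select the feature columns by name, then convert to an ndarray;
# to_numpy() is the documented replacement for .values.
features = data[['sepal_len','sepal_wid','petal_len','petal_wid']]
x = features.to_numpy()          # shape (150, 4), dtype float64
y = data['target'].to_numpy()    # the species labels
print(x[:2,:], y[:2])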

In [9]: import urllib.request  # in Python 3 the submodule must be imported explicitly
target_page='https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/a1a'
a2a=urllib.request.urlopen(target_page)
from sklearn.datasets import load_svmlight_file
x_train, y_train = load_svmlight_file(a2a)  # Python-style multiple assignment
print(x_train.shape, y_train.shape)
print(x_train[:1][:100])  # x_train is a sparse matrix; printing it shows the positions of the non-zero entries
print(type(x_train))
print('\n')
print(y_train[:10])
print(type(y_train))
(1605, 119) (1605,)
(0, 2) 1.0
(0, 10) 1.0
(0, 13) 1.0
(0, 18) 1.0
(0, 38) 1.0
(0, 41) 1.0
(0, 54) 1.0
(0, 63) 1.0
(0, 66) 1.0
(0, 72) 1.0
(0, 74) 1.0
(0, 75) 1.0
(0, 79) 1.0
(0, 82) 1.0
<class 'scipy.sparse.csr.csr_matrix'>
[-1. -1. -1. -1. -1. -1. -1. -1. -1. 1.]
<class 'numpy.ndarray'>
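
To see what the CSR format is doing, a tiny hand-built example (my addition) round-trips between the sparse and dense forms:

import numpy as np
from scipy.sparse import csr_matrix

# A 2x3 matrix with three non-zeros, stored sparsely.
m = csr_matrix(np.array([[0., 1., 0.],
                         [2., 0., 3.]]))
print(m)            # (row, col)  value triples, like x_train above
print(m.toarray())  # back to a dense ndarray (only safe for small matrices)
print(m.nnz)        # number of stored non-zero entries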

In [51]: # Example: structuring text data
from sklearn.datasets import fetch_20newsgroups
my_news=fetch_20newsgroups(categories=['sci.med'])  # download the medical newsgroup subset
print(type(my_news),'\n')
print(my_news.data[0],'\n')  # the original printed twenty_sci_news.data[0], an undefined name; the variable here is my_news
from sklearn.feature_extraction.text import CountVectorizer
count_vect=CountVectorizer()
word_count=count_vect.fit_transform(my_news.data)  # returns a sparse matrix object
print(type(word_count))
print(word_count.shape,'\n')
print(word_count[0])  # coordinates (as tuples) and counts of the non-zero entries of row 0
word_list=count_vect.get_feature_names()  # get_feature_names_out() in scikit-learn >= 1.0
for n in word_count[0].indices:
    print(word_list[n],'\t appears ', word_count[0,n],
          'times')  # word frequencies for the first article
<class 'sklearn.utils.Bunch'>
From: nyeda@cnsvax.uwec.edu (David Nye)
Subject: Re: Post Polio Syndrome Information Needed Please !!!
Organization: University of Wisconsin Eau Claire
Lines: 21
[reply to keith@actrix.gen.nz (Keith Stewart)]
>My wife has become interested through an acquaintance in Post-Polio
>Syndrome This apparently is not recognised in New Zealand and different
>symptons ( eg chest complaints) are treated separately. Does anone have
>any information on it
It would help if you (and anyone else asking for medical information on
some subject) could ask specific questions, as no one is likely to type
in a textbook chapter covering all aspects of the subject. If you are
looking for a comprehensive review, ask your local hospital librarian.
Most are happy to help with a request of this sort.
Briefly, this is a condition in which patients who have significant
residual weakness from childhood polio notice progression of the
weakness as they get older. One theory is that the remaining motor
neurons have to work harder and so die sooner.
David Nye (nyeda@cnsvax.uwec.edu). Midelfort Clinic, Eau Claire WI
This is patently absurd; but whoever wishes to become a philosopher
must learn not to be frightened by absurdities. -- Bertrand Russell
<class 'scipy.sparse.csr.csr_matrix'>
(594, 16257)
(0, 12891) 1
(0, 2742) 1
(0, 1496) 1
(0, 3189) 1
(0, 6580) 1
(0, 2628) 1
(0, 8837) 1
(0, 10048) 1
(0, 11268) 1
(0, 16003) 1
(0, 15944) 1
(0, 3175) 1
(0, 1495) 1
(0, 11052) 1
(0, 15952) 1
(0, 3798) 1
(0, 9701) 1
(0, 13693) 1
(0, 4977) 1
(0, 13621) 1
(0, 7147) 1
(0, 16049) 1
(0, 10245) 1
(0, 9960) 1
(0, 12405) 1
: :
(0, 14822) 6
(0, 12475) 1
(0, 564) 1
(0, 8990) 1
(0, 3757) 2
(0, 5470) 2
(0, 15998) 1
(0, 10599) 4
(0, 15299) 1
(0, 10745) 1
(0, 11417) 1
(0, 10178) 1
(0, 7896) 3
(0, 14389) 2
(0, 11477) 3
(0, 11548) 2
(0, 12157) 1
(0, 14117) 3
(0, 10497) 2
(0, 4661) 2
(0, 5508) 2
(0, 15454) 2
(0, 3842) 2
(0, 10498) 2
(0, 6585) 2
russell appears 1 times
bertrand appears 1 times
absurdities appears 1 times
by appears 1 times
frightened appears 1 times
be appears 1 times
learn appears 1 times
must appears 1 times
philosopher appears 1 times
wishes appears 1 times
whoever appears 1 times
but appears 1 times
absurd appears 1 times
patently appears 1 times
wi appears 1 times
clinic appears 1 times
midelfort appears 1 times
sooner appears 1 times
die appears 1 times
so appears 1 times
harder appears 1 times
work appears 1 times
neurons appears 1 times
motor appears 1 times
remaining appears 1 times
that appears 1 times
theory appears 1 times
older appears 1 times
get appears 1 times
they appears 1 times
progression appears 1 times
notice appears 1 times
childhood appears 1 times
weakness appears 2 times
residual appears 1 times
significant appears 1 times
who appears 1 times
patients appears 1 times
which appears 1 times
condition appears 1 times
briefly appears 1 times
sort appears 1 times
request appears 1 times
with appears 1 times
happy appears 1 times
most appears 1 times
librarian appears 1 times
hospital appears 1 times
local appears 1 times
your appears 1 times
review appears 1 times
comprehensive appears 1 times
looking appears 1 times
the appears 3 times
aspects appears 1 times
all appears 1 times
covering appears 1 times
chapter appears 1 times
textbook appears 1 times
type appears 1 times
likely appears 1 times
one appears 2 times
no appears 1 times
as appears 2 times
questions appears 1 times
specific appears 1 times
ask appears 2 times
could appears 1 times
some appears 1 times
medical appears 1 times
for appears 2 times
asking appears 1 times
else appears 1 times
anyone appears 1 times
you appears 2 times
if appears 2 times
help appears 2 times
would appears 1 times
it appears 2 times
on appears 2 times
any appears 1 times
have appears 3 times
anone appears 1 times
does appears 1 times
separately appears 1 times
treated appears 1 times
are appears 3 times
complaints appears 1 times
chest appears 1 times
eg appears 1 times
symptons appears 1 times
different appears 1 times
and appears 3 times
zealand appears 1 times
new appears 1 times
recognised appears 1 times
not appears 2 times
is appears 5 times
apparently appears 1 times
this appears 4 times
in appears 4 times
acquaintance appears 1 times
an appears 1 times
through appears 1 times
interested appears 1 times
become appears 2 times
has appears 1 times
wife appears 1 times
my appears 1 times
stewart appears 1 times
nz appears 1 times
gen appears 1 times
actrix appears 1 times
keith appears 2 times
to appears 6 times
reply appears 1 times
21 appears 1 times
lines appears 1 times
claire appears 2 times
eau appears 2 times
wisconsin appears 1 times
of appears 4 times
university appears 1 times
organization appears 1 times
please appears 1 times
needed appears 1 times
information appears 3 times
syndrome appears 2 times
polio appears 3 times
post appears 2 times
re appears 1 times
subject appears 3 times
nye appears 2 times
david appears 2 times
edu appears 2 times
uwec appears 2 times
cnsvax appears 2 times
nyeda appears 2 times
from appears 2 times
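
Raw counts let stop words dominate ('to', 'is', 'the' top the list above). The usual next step, not covered in this cell, is TF-IDF weighting; a minimal sketch (my addition) on the same my_news data:

from sklearn.feature_extraction.text import TfidfVectorizer

# TfidfVectorizer = CountVectorizer plus TF-IDF reweighting in one step.
tfidf_vect = TfidfVectorizer()
word_tfidf = tfidf_vect.fit_transform(my_news.data)
print(word_tfidf.shape)  # same (documents, vocabulary) shape as word_count
print(word_tfidf[0])     # but the entries are weights, not raw counts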

In [1]: import pandas as pd
my_chunk=pd.read_csv(r'C:\Python\Scripts\my_data\iris.csv',header=None,
                     names=['c1','c2','c3','c4','c5'],
                     chunksize=20)
print(type(my_chunk))
# Note: once chunksize is given, read_csv no longer returns a DataFrame
# but an iterable TextFileReader object.
# It records where the chunks are; a block of data is only actually read
# into memory when the iterator reaches it.
for n,chunk in enumerate(my_chunk):  # enumerate yields both the index and the element
    print(chunk.shape)
    if n <= 2:
        print(chunk)  # each chunk is itself a DataFrame
        print('\n')
    if n <= 2:
        print(my_chunk.get_chunk(1),'\n')
        # get_chunk reads a block of the given size from the current position
        # and also returns a DataFrame;
        # note that get_chunk advances the iterator's position
<class 'pandas.io.parsers.TextFileReader'>
(20, 5)
c1 c2 c3 c4 c5
0 5.1 3.5 1.4 0.2 setosa
1 4.9 3.0 1.4 0.2 setosa
2 4.7 3.2 1.3 0.2 setosa
3 4.6 3.1 1.5 0.2 setosa
4 5.0 3.6 1.4 0.2 setosa
5 5.4 3.9 1.7 0.4 setosa
6 4.6 3.4 1.4 0.3 setosa
7 5.0 3.4 1.5 0.2 setosa
8 4.4 2.9 1.4 0.2 setosa
9 4.9 3.1 1.5 0.1 setosa
10 5.4 3.7 1.5 0.2 setosa
11 4.8 3.4 1.6 0.2 setosa
12 4.8 3.0 1.4 0.1 setosa
13 4.3 3.0 1.1 0.1 setosa
14 5.8 4.0 1.2 0.2 setosa
15 5.7 4.4 1.5 0.4 setosa
16 5.4 3.9 1.3 0.4 setosa
17 5.1 3.5 1.4 0.3 setosa
18 5.7 3.8 1.7 0.3 setosa
19 5.1 3.8 1.5 0.3 setosa
c1 c2 c3 c4 c5
20 5.4 3.4 1.7 0.2 setosa
(20, 5)
c1 c2 c3 c4 c5
21 5.1 3.7 1.5 0.4 setosa
22 4.6 3.6 1.0 0.2 setosa
23 5.1 3.3 1.7 0.5 setosa
24 4.8 3.4 1.9 0.2 setosa
25 5.0 3.0 1.6 0.2 setosa
26 5.0 3.4 1.6 0.4 setosa
27 5.2 3.5 1.5 0.2 setosa
28 5.2 3.4 1.4 0.2 setosa
29 4.7 3.2 1.6 0.2 setosa
30 4.8 3.1 1.6 0.2 setosa
31 5.4 3.4 1.5 0.4 setosa
32 5.2 4.1 1.5 0.1 setosa
33 5.5 4.2 1.4 0.2 setosa
34 4.9 3.1 1.5 0.1 setosa
35 5.0 3.2 1.2 0.2 setosa
36 5.5 3.5 1.3 0.2 setosa
37 4.9 3.1 1.5 0.1 setosa
38 4.4 3.0 1.3 0.2 setosa
39 5.1 3.4 1.5 0.2 setosa
40 5.0 3.5 1.3 0.3 setosa
c1 c2 c3 c4 c5
41 4.5 2.3 1.3 0.3 setosa
(20, 5)
c1 c2 c3 c4 c5
42 4.4 3.2 1.3 0.2 setosa
43 5.0 3.5 1.6 0.6 setosa
44 5.1 3.8 1.9 0.4 setosa
45 4.8 3.0 1.4 0.3 setosa
46 5.1 3.8 1.6 0.2 setosa
47 4.6 3.2 1.4 0.2 setosa
48 5.3 3.7 1.5 0.2 setosa
49 5.0 3.3 1.4 0.2 setosa
50 7.0 3.2 4.7 1.4 versicolor
51 6.4 3.2 4.5 1.5 versicolor
52 6.9 3.1 4.9 1.5 versicolor
53 5.5 2.3 4.0 1.3 versicolor
54 6.5 2.8 4.6 1.5 versicolor
55 5.7 2.8 4.5 1.3 versicolor
56 6.3 3.3 4.7 1.6 versicolor
57 4.9 2.4 3.3 1.0 versicolor
58 6.6 2.9 4.6 1.3 versicolor
59 5.2 2.7 3.9 1.4 versicolor
60 5.0 2.0 3.5 1.0 versicolor
61 5.9 3.0 4.2 1.5 versicolor
c1 c2 c3 c4 c5
62 6.0 2.2 4.0 1.0 versicolor
(20, 5)
(20, 5)
(20, 5)
(20, 5)
(7, 5)
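
The point of chunked reading is that statistics can be aggregated over a file that never fits in memory all at once. A small sketch (my addition) that accumulates the mean of column c1 chunk by chunk:

import pandas as pd

# A TextFileReader can only be iterated once, so recreate it; then
# accumulate sums without ever holding the whole file in memory.
total, count = 0.0, 0
reader = pd.read_csv(r'C:\Python\Scripts\my_data\iris.csv', header=None,
                     names=['c1','c2','c3','c4','c5'], chunksize=20)
for chunk in reader:
    total += chunk['c1'].sum()
    count += len(chunk)
print('mean of c1:', total/count)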

In [40]: import numpy as np
import pandas as pd
import csv
with open(r'C:\Python\Scripts\my_data\iris.csv','r') as my_data_stream:
    # the with statement guarantees the file is closed when the indented block finishes
    # open opens the file read-only and stores the file object in my_data_stream
    # csv.reader reads from the file object one line at a time, returning each line as a list
    # the reader here is an iterator object
    my_reader=csv.reader(my_data_stream,dialect='excel')
    for n,row in enumerate(my_reader):
        if n<=5:
            print(row)  # each row is indeed a list
            print(type(row),'\n')
['5.1', '3.5', '1.4', '0.2', 'setosa']
<class 'list'>
['4.9', '3.0', '1.4', '0.2', 'setosa']
<class 'list'>
['4.7', '3.2', '1.3', '0.2', 'setosa']
<class 'list'>
['4.6', '3.1', '1.5', '0.2', 'setosa']
<class 'list'>
['5.0', '3.6', '1.4', '0.2', 'setosa']
<class 'list'>
['5.4', '3.9', '1.7', '0.4', 'setosa']
<class 'list'>
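
When a CSV has a header row (iris.csv here does not), csv.DictReader maps each row to the column names instead; a minimal sketch (my addition) that supplies the field names by hand:

import csv

# fieldnames is given explicitly because iris.csv has no header row;
# the labels are the same ones used with read_csv above.
with open(r'C:\Python\Scripts\my_data\iris.csv','r') as f:
    reader = csv.DictReader(f, fieldnames=['sepal_len','sepal_wid',
                                           'petal_len','petal_wid','target'])
    for n, row in enumerate(reader):
        if n >= 2:
            break
        print(row['sepal_len'], row['target'])  # fields accessed by name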

In [31]: import numpy as np
import pandas as pd
import csv
def batch_read(filename, batch=5):  # note the syntax for defining a function
    with open(filename,'r') as data_stream:
        # the with statement guarantees the file is closed when the block finishes
        # open opens the file read-only and stores the file object in data_stream
        batch_output=list()  # initialize the batch_output list
        # csv.reader reads one line of the file at a time, each line becoming one list
        # the reader here is an iterator object
        for n,row in enumerate(csv.reader(data_stream,dialect='excel')):
            # enumerate yields a tuple of (line number, line content)
            # the for loop walks through what the reader yields
            if n>0 and n%batch==0:
                # yield makes this a generator: like return, but the function
                # can be resumed and iterated over repeatedly;
                # the next line converts batch_output to an ndarray before yielding
                yield(np.array(batch_output))
                batch_output=list()  # reset batch_output
            batch_output.append(row)  # extend batch_output
        yield(np.array(batch_output))  # yield whatever rows remain at the end
for batch_input in batch_read(r'C:\Python\Scripts\my_data\iris.csv',batch=7):
    # note that the default argument can be overridden
    print(batch_input)
    print('\n')
[['5.1' '3.5' '1.4' '0.2' 'setosa']
['4.9' '3.0' '1.4' '0.2' 'setosa']
['4.7' '3.2' '1.3' '0.2' 'setosa']
['4.6' '3.1' '1.5' '0.2' 'setosa']
['5.0' '3.6' '1.4' '0.2' 'setosa']
['5.4' '3.9' '1.7' '0.4' 'setosa']
['4.6' '3.4' '1.4' '0.3' 'setosa']]
[['5.0' '3.4' '1.5' '0.2' 'setosa']
['4.4' '2.9' '1.4' '0.2' 'setosa']
['4.9' '3.1' '1.5' '0.1' 'setosa']
['5.4' '3.7' '1.5' '0.2' 'setosa']
['4.8' '3.4' '1.6' '0.2' 'setosa']
['4.8' '3.0' '1.4' '0.1' 'setosa']
['4.3' '3.0' '1.1' '0.1' 'setosa']]
[['5.8' '4.0' '1.2' '0.2' 'setosa']
['5.7' '4.4' '1.5' '0.4' 'setosa']
['5.4' '3.9' '1.3' '0.4' 'setosa']
['5.1' '3.5' '1.4' '0.3' 'setosa']
['5.7' '3.8' '1.7' '0.3' 'setosa']
['5.1' '3.8' '1.5' '0.3' 'setosa']
['5.4' '3.4' '1.7' '0.2' 'setosa']]
[['5.1' '3.7' '1.5' '0.4' 'setosa']
['4.6' '3.6' '1.0' '0.2' 'setosa']
['5.1' '3.3' '1.7' '0.5' 'setosa']
['4.8' '3.4' '1.9' '0.2' 'setosa']
['5.0' '3.0' '1.6' '0.2' 'setosa']
['5.0' '3.4' '1.6' '0.4' 'setosa']
['5.2' '3.5' '1.5' '0.2' 'setosa']]
[['5.2' '3.4' '1.4' '0.2' 'setosa']
['4.7' '3.2' '1.6' '0.2' 'setosa']
['4.8' '3.1' '1.6' '0.2' 'setosa']
['5.4' '3.4' '1.5' '0.4' 'setosa']
['5.2' '4.1' '1.5' '0.1' 'setosa']
['5.5' '4.2' '1.4' '0.2' 'setosa']
['4.9' '3.1' '1.5' '0.1' 'setosa']]
[['5.0' '3.2' '1.2' '0.2' 'setosa']
['5.5' '3.5' '1.3' '0.2' 'setosa']
['4.9' '3.1' '1.5' '0.1' 'setosa']
['4.4' '3.0' '1.3' '0.2' 'setosa']
['5.1' '3.4' '1.5' '0.2' 'setosa']
['5.0' '3.5' '1.3' '0.3' 'setosa']
['4.5' '2.3' '1.3' '0.3' 'setosa']]
[['4.4' '3.2' '1.3' '0.2' 'setosa']
['5.0' '3.5' '1.6' '0.6' 'setosa']
['5.1' '3.8' '1.9' '0.4' 'setosa']
['4.8' '3.0' '1.4' '0.3' 'setosa']
['5.1' '3.8' '1.6' '0.2' 'setosa']
['4.6' '3.2' '1.4' '0.2' 'setosa']
['5.3' '3.7' '1.5' '0.2' 'setosa']]
[['5.0' '3.3' '1.4' '0.2' 'setosa']
['7.0' '3.2' '4.7' '1.4' 'versicolor']
['6.4' '3.2' '4.5' '1.5' 'versicolor']
['6.9' '3.1' '4.9' '1.5' 'versicolor']
['5.5' '2.3' '4.0' '1.3' 'versicolor']
['6.5' '2.8' '4.6' '1.5' 'versicolor']
['5.7' '2.8' '4.5' '1.3' 'versicolor']]
[['6.3' '3.3' '4.7' '1.6' 'versicolor']
['4.9' '2.4' '3.3' '1.0' 'versicolor']
['6.6' '2.9' '4.6' '1.3' 'versicolor']
['5.2' '2.7' '3.9' '1.4' 'versicolor']
['5.0' '2.0' '3.5' '1.0' 'versicolor']
['5.9' '3.0' '4.2' '1.5' 'versicolor']
['6.0' '2.2' '4.0' '1.0' 'versicolor']]
[['6.1' '2.9' '4.7' '1.4' 'versicolor']
['5.6' '2.9' '3.6' '1.3' 'versicolor']
['6.7' '3.1' '4.4' '1.4' 'versicolor']
['5.6' '3.0' '4.5' '1.5' 'versicolor']
['5.8' '2.7' '4.1' '1.0' 'versicolor']
['6.2' '2.2' '4.5' '1.5' 'versicolor']
['5.6' '2.5' '3.9' '1.1' 'versicolor']]
[['5.9' '3.2' '4.8' '1.8' 'versicolor']
['6.1' '2.8' '4.0' '1.3' 'versicolor']
['6.3' '2.5' '4.9' '1.5' 'versicolor']
['6.1' '2.8' '4.7' '1.2' 'versicolor']
['6.4' '2.9' '4.3' '1.3' 'versicolor']
['6.6' '3.0' '4.4' '1.4' 'versicolor']
['6.8' '2.8' '4.8' '1.4' 'versicolor']]
[['6.7' '3.0' '5.0' '1.7' 'versicolor']
['6.0' '2.9' '4.5' '1.5' 'versicolor']
['5.7' '2.6' '3.5' '1.0' 'versicolor']
['5.5' '2.4' '3.8' '1.1' 'versicolor']
['5.5' '2.4' '3.7' '1.0' 'versicolor']
['5.8' '2.7' '3.9' '1.2' 'versicolor']
['6.0' '2.7' '5.1' '1.6' 'versicolor']]
[['5.4' '3.0' '4.5' '1.5' 'versicolor']
['6.0' '3.4' '4.5' '1.6' 'versicolor']
['6.7' '3.1' '4.7' '1.5' 'versicolor']
['6.3' '2.3' '4.4' '1.3' 'versicolor']
['5.6' '3.0' '4.1' '1.3' 'versicolor']
['5.5' '2.5' '4.0' '1.3' 'versicolor']
['5.5' '2.6' '4.4' '1.2' 'versicolor']]
[['6.1' '3.0' '4.6' '1.4' 'versicolor']
['5.8' '2.6' '4.0' '1.2' 'versicolor']
['5.0' '2.3' '3.3' '1.0' 'versicolor']
['5.6' '2.7' '4.2' '1.3' 'versicolor']
['5.7' '3.0' '4.2' '1.2' 'versicolor']
['5.7' '2.9' '4.2' '1.3' 'versicolor']
['6.2' '2.9' '4.3' '1.3' 'versicolor']]
[['5.1' '2.5' '3.0' '1.1' 'versicolor']
['5.7' '2.8' '4.1' '1.3' 'versicolor']
['6.3' '3.3' '6.0' '2.5' 'virginica']
['5.8' '2.7' '5.1' '1.9' 'virginica']
['7.1' '3.0' '5.9' '2.1' 'virginica']
['6.3' '2.9' '5.6' '1.8' 'virginica']
['6.5' '3.0' '5.8' '2.2' 'virginica']]
[['7.6' '3.0' '6.6' '2.1' 'virginica']
['4.9' '2.5' '4.5' '1.7' 'virginica']
['7.3' '2.9' '6.3' '1.8' 'virginica']
['6.7' '2.5' '5.8' '1.8' 'virginica']
['7.2' '3.6' '6.1' '2.5' 'virginica']
['6.5' '3.2' '5.1' '2.0' 'virginica']
['6.4' '2.7' '5.3' '1.9' 'virginica']]
[['6.8' '3.0' '5.5' '2.1' 'virginica']
['5.7' '2.5' '5.0' '2.0' 'virginica']
['5.8' '2.8' '5.1' '2.4' 'virginica']
['6.4' '3.2' '5.3' '2.3' 'virginica']
['6.5' '3.0' '5.5' '1.8' 'virginica']
['7.7' '3.8' '6.7' '2.2' 'virginica']
['7.7' '2.6' '6.9' '2.3' 'virginica']]
[['6.0' '2.2' '5.0' '1.5' 'virginica']
['6.9' '3.2' '5.7' '2.3' 'virginica']
['5.6' '2.8' '4.9' '2.0' 'virginica']
['7.7' '2.8' '6.7' '2.0' 'virginica']
['6.3' '2.7' '4.9' '1.8' 'virginica']
['6.7' '3.3' '5.7' '2.1' 'virginica']
['7.2' '3.2' '6.0' '1.8' 'virginica']]
[['6.2' '2.8' '4.8' '1.8' 'virginica']
['6.1' '3.0' '4.9' '1.8' 'virginica']
['6.4' '2.8' '5.6' '2.1' 'virginica']
['7.2' '3.0' '5.8' '1.6' 'virginica']
['7.4' '2.8' '6.1' '1.9' 'virginica']
['7.9' '3.8' '6.4' '2.0' 'virginica']
['6.4' '2.8' '5.6' '2.2' 'virginica']]
[['6.3' '2.8' '5.1' '1.5' 'virginica']
['6.1' '2.6' '5.6' '1.4' 'virginica']
['7.7' '3.0' '6.1' '2.3' 'virginica']
['6.3' '3.4' '5.6' '2.4' 'virginica']
['6.4' '3.1' '5.5' '1.8' 'virginica']
['6.0' '3.0' '4.8' '1.8' 'virginica']
['6.9' '3.1' '5.4' '2.1' 'virginica']]
[['6.7' '3.1' '5.6' '2.4' 'virginica']
['6.9' '3.1' '5.1' '2.3' 'virginica']
['5.8' '2.7' '5.1' '1.9' 'virginica']
['6.8' '3.2' '5.9' '2.3' 'virginica']
['6.7' '3.3' '5.7' '2.5' 'virginica']
['6.7' '3.0' '5.2' '2.3' 'virginica']
['6.3' '2.5' '5.0' '1.9' 'virginica']]
[['6.5' '3.0' '5.2' '2.0' 'virginica']
['6.2' '3.4' '5.4' '2.3' 'virginica']
['5.9' '3.0' '5.1' '1.8' 'virginica']]
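
Since yield is the one genuinely new construct in the cell above, a stripped-down generator (my addition) makes its resume-where-it-left-off behavior easier to see:

def count_up_to(limit):
    n = 0
    while n < limit:
        yield n   # pause here; execution resumes on the next iteration
        n += 1

gen = count_up_to(3)
print(next(gen))   # 0 -- runs until the first yield
print(next(gen))   # 1 -- resumes after the yield, with n preserved
print(list(gen))   # [2] -- the remaining values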

Still transcribing notes to pad things out......