%pylab inline
from keras.datasets import mnist
import mxnet as mx
from mxnet import nd
from mxnet import autograd 
import random
from mxnet import gluon

# keras is used here only to download and load the MNIST arrays
(x_train, y_train), (x_test, y_test) = mnist.load_data()
num_examples = x_train.shape[0]                    # number of training images
num_inputs = x_train.shape[1] * x_train.shape[2]   # 28 * 28 = 784 pixels per image
batch_size = 64
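
A quick sanity check of the loaded arrays (the values in the comments are the standard MNIST dimensions):

print(x_train.shape, y_train.shape)   # (60000, 28, 28) (60000,)
print(num_examples, num_inputs)       # 60000 784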

1. Custom data iterator

def data_iter1(X, Y, batch_size):
    """Yield shuffled (data, label) mini-batches from the arrays X and Y."""
    num_samples = X.shape[0]
    idx = list(range(num_samples))
    random.shuffle(idx)              # shuffle the sample indices

    X = nd.array(X)
    Y = nd.array(Y)
    for i in range(0, num_samples, batch_size):
        j = nd.array(idx[i: min(i + batch_size, num_samples)])
        yield nd.take(X, j), nd.take(Y, j)   # gather the rows for this batch

2. Gluon iterator

dataset = gluon.data.ArrayDataset(x_train, y_train)        # wrap the arrays as a Dataset
data_iter = gluon.data.DataLoader(dataset, batch_size, shuffle=True)   # shuffling and batching handled by Gluon
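
As a side note (not in the original post), DataLoader also takes preprocessing and batching options. A minimal sketch, assuming an MXNet version that provides Dataset.transform_first and the last_batch argument of gluon.data.DataLoader:

# Sketch: scale images to [0, 1] float32 before batching and drop the final
# incomplete batch. transform_first applies the function to the data element
# of each sample (the label is left untouched).
def to_float(x):
    return nd.array(x).astype('float32') / 255

float_iter = gluon.data.DataLoader(dataset.transform_first(to_float),
                                   batch_size, shuffle=True,
                                   last_batch='discard')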

3. Fetching data from the iterators

for data, label in data_iter:
    print(data.shape, label.shape)
    break
(64, 28, 28) (64,)

for data, label in data_iter1(x_train, y_train, batch_size):
    print(data.shape, label.shape)
    break
(64, 28, 28) (64,)
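
Beyond the first batch, a full pass can be compared across the two iterators. A minimal sketch (the counts in the comments assume the 60,000-image training set and batch_size = 64):

# Count batches and samples in one epoch for both iterators.
# With 60000 samples and batch_size = 64, both should yield
# ceil(60000 / 64) = 938 batches covering all 60000 samples.
for it in (data_iter, data_iter1(x_train, y_train, batch_size)):
    n_batches = n_samples = 0
    for data, label in it:
        n_batches += 1
        n_samples += data.shape[0]
    print(n_batches, n_samples)   # expected: 938 60000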

Copyright notice: This is an original article by q735613050, released under the CC 4.0 BY-SA license. When reposting, please include a link to the original source and this notice.
Original link: https://www.cnblogs.com/q735613050/p/8367173.html