(macOS) Installing and configuring the TensorFlow Object Detection API; labeling your own images with RectLabel, generating a TF dataset, and training SSD+MobileNet

1 Download the TensorFlow models repo

From GitHub:
https://github.com/tensorflow/models
Download the zip and unzip it to:
/Users/yourusername/models-master

Throughout this article, yourusername stands for the current macOS username; replace it with your own.

2 Create the environment

In Anaconda, create a new environment named TF_OD with Python 3.6:

conda create -n TF_OD python=3.6

3 Install the TensorFlow Object Detection API dependencies

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md

3.1 Dependencies

Install each item following the Installation guide on GitHub (my system is macOS High Sierra):

pip install --user Cython
pip install --user contextlib2
pip install --user pillow
pip install --user lxml
pip install --user jupyter
pip install --user matplotlib

3.2 Install protobuf-compiler manually

On macOS, protobuf-compiler can be installed with Homebrew:

brew install protobuf

Run protoc --version in a terminal to check the version, then compile the protos:

# From /Users/yourusername/models-master/research/
protoc object_detection/protos/*.proto --python_out=.

3.3 Install the COCO API

Run the following from /Users/yourusername/models-master/research:

git clone https://github.com/pdollar/coco.git

cd coco/PythonAPI
make
sudo make install
sudo python setup.py install

Install Cython before running the steps above; see:

https://github.com/cocodataset/cocoapi

3.4 Add the environment variables to .bashrc

Quoted from the Google GitHub documentation:

When running locally, the tensorflow/models/research/ and slim directories should be appended to PYTHONPATH. This can be done by running the following from tensorflow/models/research/:

# From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

Note: This command needs to run from every new terminal you start. If you wish to avoid running this manually, you can add it as a new line to the end of your ~/.bashrc file, replacing pwd with the absolute path of tensorflow/models/research on your system.


Open .bashrc and append at the end:

export PYTHONPATH=$PYTHONPATH:/Users/yourusername/models-master/research:/Users/yourusername/models-master/research/slim
export PYTHONPATH=$PYTHONPATH:/Users/yourusername/models-master/research/coco/PythonAPI

After saving, run:

$ source ~/.bashrc
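As an alternative to editing ~/.bashrc, the same paths can be appended to sys.path at the top of each script before object_detection is imported. A minimal sketch, assuming the repo lives at the path used throughout this article:

```python
import sys

# Adjust to wherever you unzipped the models repo.
RESEARCH = "/Users/yourusername/models-master/research"

for p in (RESEARCH, RESEARCH + "/slim", RESEARCH + "/coco/PythonAPI"):
    if p not in sys.path:      # avoid piling up duplicate entries
        sys.path.append(p)

# From here on, `from object_detection.utils import dataset_util` would
# resolve, provided the repo really is at RESEARCH.
```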

3.5 Verify that the TensorFlow Object Detection API is installed correctly

Run the following from /Users/yourusername/models-master/research/:

python3 object_detection/builders/model_builder_test.py

This produced the following error:

Traceback (most recent call last):
  File "object_detection/builders/model_builder_test.py", line 23, in <module>
    from object_detection.builders import model_builder
  File "/Users/yourusername/models-master/research/object_detection/builders/model_builder.py", line 34, in <module>
    from object_detection.meta_architectures import ssd_meta_arch
  File "/Users/yourusername/models-master/research/object_detection/meta_architectures/ssd_meta_arch.py", line 31, in <module>
    from object_detection.utils import visualization_utils
  File "/Users/yourusername/models-master/research/object_detection/utils/visualization_utils.py", line 29, in <module>
    import PIL.Image as Image
ModuleNotFoundError: No module named 'PIL'

Attempting to install PIL also failed:

(TF_OD) bash-3.2$ pip3 install PIL
Collecting PIL
  Could not find a version that satisfies the requirement PIL (from versions: )
No matching distribution found for PIL

A quick search shows that PIL has been superseded by Pillow, but the code on Google's GitHub still uses the old import style, mainly in this file:
/Users/yourusername/models-master/research/object_detection/utils/visualization_utils.py

which contains these lines:

import PIL.Image as Image
import PIL.ImageColor as ImageColor
import PIL.ImageDraw as ImageDraw
import PIL.ImageFont as ImageFont

Uninstalling PIL/Pillow and reinstalling Pillow, as suggested online, did not help; what finally worked was rewriting the imports as:

from PIL import Image
from PIL import ImageColor
from PIL import ImageDraw
from PIL import ImageFont

Re-running

python3 object_detection/builders/model_builder_test.py

now gives:

(TF_OD) bash-3.2$ python3 object_detection/builders/model_builder_test.py
................./Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() or inspect.getfullargspec()
  if d.decorator_argspec is not None), _inspect.getargspec(target))
.....
----------------------------------------------------------------------
Ran 22 tests in 0.086s

OK

4 Prepare your own dataset

Create a working directory TF_OD under /Users/yourusername/PycharmProjects

(PycharmProjects here can be any directory you like.)

Put the labeled images and their XML annotations into two subdirectories:

--train_set
--val_set

4.1 Create the XML-to-CSV scripts in the TF_OD/ root

Create one conversion script each for the training and validation sets:

TF_OD/train_xml_to_csv.py
TF_OD/val_xml_to_csv.py

4.1.1 train_xml_to_csv.py

import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET


def xml_to_csv(path):
    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            value = (path+'/'+root.find('filename').text,
                     int(root.find('size')[0].text),
                     int(root.find('size')[1].text),
                     member[0].text,
                     int(member.find('bndbox')[0].text),
                     int(member.find('bndbox')[1].text),
                     int(member.find('bndbox')[2].text),
                     int(member.find('bndbox')[3].text),
                     )
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df


def main():
    image_path = os.path.join(os.getcwd(), 'train_set')
    print(image_path)
    xml_df = xml_to_csv(image_path)
    xml_df.to_csv('labels_train.csv', index=None)
    print('Successfully converted xml to csv.')


if __name__ == '__main__':
    main()

4.1.2 val_xml_to_csv.py

import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET


def xml_to_csv(path):
    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            value = (path+'/'+root.find('filename').text,
                     int(root.find('size')[0].text),
                     int(root.find('size')[1].text),
                     member[0].text,
                     #int(member[4][0].text),
                     #int(member[4][1].text),
                     #int(member[4][2].text),
                     #int(member[4][3].text),
                     int(member.find('bndbox')[0].text),
                     int(member.find('bndbox')[1].text),
                     int(member.find('bndbox')[2].text),
                     int(member.find('bndbox')[3].text),
                     )
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df


def main():
    image_path = os.path.join(os.getcwd(), 'val_set')
    print(image_path)
    xml_df = xml_to_csv(image_path)
    xml_df.to_csv('labels_val.csv', index=None)
    print('Successfully converted xml to csv.')


if __name__ == '__main__':
    main()

4.1.3 Notes on the code in train_xml_to_csv.py / val_xml_to_csv.py

1. Change 'train_set' / 'val_set' to the actual directory holding your images and XML files:

image_path = os.path.join(os.getcwd(), 'val_set')

2. The XML-parsing code commonly found online builds each record like this, but in practice the directory must be prepended to the filename, otherwise the file cannot be found later:

for member in root.findall('object'):
            value = (root.find('filename').text,
                     int(root.find('size')[0].text),

Change the first field to:

 value = (path+'/'+root.find('filename').text,

3. The bounding-box parsing found online uses positional indexing:

                     member[0].text,
                     int(member[4][0].text),
                     int(member[4][1].text),
                     int(member[4][2].text),
                     int(member[4][3].text),

This tolerates format variations poorly; it could not parse the XML generated by RectLabel. Commenting those lines out and looking the element up by tag instead made the conversion succeed:

                     member[0].text,
                     int(member.find('bndbox')[0].text),
                     int(member.find('bndbox')[1].text),
                     int(member.find('bndbox')[2].text),
                     int(member.find('bndbox')[3].text),
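The difference matters because positional indexing assumes a fixed number of child tags under each <object>. A self-contained sketch (the annotation below is hand-written for illustration, with <object> carrying only <name> and <bndbox>, as some labeling tools emit it):

```python
import xml.etree.ElementTree as ET

# A minimal Pascal VOC-style annotation (layout assumed for illustration).
XML = """<annotation>
  <filename>img1.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>hualiao</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>220</ymax></bndbox>
  </object>
</annotation>"""

root = ET.fromstring(XML)
obj = root.find('object')

# Positional access breaks: <bndbox> is child 1 here, not child 4.
try:
    obj[4]
except IndexError:
    print("member[4] failed: no fifth child under <object>")

# Tag-based lookup works no matter how many sibling tags precede <bndbox>.
bndbox = obj.find('bndbox')
coords = [int(child.text) for child in bndbox]
print(coords)   # → [10, 20, 110, 220]
```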

4.2 Run the XML-to-CSV conversion

In Anaconda, open a terminal in the TF_OD environment,

cd into the TF_OD directory, and run both scripts:

python3 train_xml_to_csv.py
python3 val_xml_to_csv.py

The generated CSV files land in the TF_OD root:

labels_train.csv
labels_val.csv

Each file contains one row per bounding box, with the columns filename, width, height, class, xmin, ymin, xmax, ymax.

4.3 Create generate_tfrecord.py

Create generate_tfrecord.py under /Users/yourusername/PycharmProjects/TF_OD.

Note that the class_text_to_int function must be edited to match your actual object classes:

###TO-DO replace this with label map
def class_text_to_int(row_label):
    if row_label == 'hualiao':
        return 1
    elif row_label == 'liantong':
        return 2
    elif row_label == 'Null':
        return 3
    elif row_label == 'line':
        return 4
    else:
        return None

hualiao, liantong, Null, and line are the four classes defined in my project. The full script:

"""
Usage:
  #From tensorflow/models/
  #Create train data:
  python generate_tfrecord.py --csv_input=labels_train.csv  --output_path=train.record
  #Create test data:
  python generate_tfrecord.py --csv_input=labels_val.csv  --output_path=test.record
"""
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import

import os
import io
import pandas as pd
import tensorflow as tf

from PIL import Image
from object_detection.utils import dataset_util
from collections import namedtuple, OrderedDict

flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
flags.DEFINE_string('image_dir', '', 'Path to images')
FLAGS = flags.FLAGS


###TO-DO replace this with label map
def class_text_to_int(row_label):
    if row_label == 'hualiao':
        return 1
    elif row_label == 'liantong':
        return 2
    elif row_label == 'Null':
        return 3
    elif row_label == 'line':
        return 4
    else:
        return None


def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size

    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []

    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def main(_):
    writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
    path = os.path.join(FLAGS.image_dir)
    examples = pd.read_csv(FLAGS.csv_input)
    grouped = split(examples, 'filename')
    for group in grouped:
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())

    writer.close()
    output_path = os.path.join(os.getcwd(), FLAGS.output_path)
    print('Successfully created the TFRecords: {}'.format(output_path))


if __name__ == '__main__':
    tf.app.run()
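The split() helper above turns the flat CSV into one group of rows per image, so that create_tf_example emits a single Example per file. A standalone sketch of that grouping, with a few invented rows in the labels_train.csv layout:

```python
import pandas as pd
from collections import namedtuple

def split(df, group):
    # Same logic as in generate_tfrecord.py: one namedtuple per unique
    # filename, carrying all of that file's bounding-box rows.
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x))
            for filename, x in zip(gb.groups.keys(), gb.groups)]

# Invented sample rows for illustration.
df = pd.DataFrame({
    'filename': ['a.jpg', 'a.jpg', 'b.jpg'],
    'class': ['hualiao', 'line', 'liantong'],
    'xmin': [1, 5, 9], 'ymin': [2, 6, 10],
    'xmax': [3, 7, 11], 'ymax': [4, 8, 12],
})

groups = split(df, 'filename')
for g in groups:
    print(g.filename, len(g.object))   # a.jpg carries 2 boxes, b.jpg carries 1
```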

4.4 Generate the records with generate_tfrecord.py

Run the following from /Users/yourusername/PycharmProjects/TF_OD:

(TF_OD) bash-3.2$ python generate_tfrecord.py --csv_input=labels_train.csv  --output_path=train.record

which prints

Successfully created the TFRecords: /Users/yourusername/PycharmProjects/TF_OD/train.record

Then run:

(TF_OD) bash-3.2$ python generate_tfrecord.py --csv_input=labels_val.csv  --output_path=test.record

which prints

Successfully created the TFRecords: /Users/yourusername/PycharmProjects/TF_OD/test.record

P.S. If you hit the error No module named 'object_detection', run the following from /Users/yourusername/models-master/research:

python setup.py install

5 Set up the training directories

5.1 Write the class label map .pbtxt file

Run vim object-detection.pbtxt to open an empty file and write the classes to be detected into it, in the following format:

item {
  id: 1
  name: 'hualiao'
}

item {
  id: 2
  name: 'liantong'
}

item {
  id: 3
  name: 'Null'
}

item {
  id: 4
  name: 'line'
}
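Rather than hard-coding class_text_to_int in generate_tfrecord.py, the mapping can be derived from this label map. The regex sketch below only handles the flat item { id / name } layout shown above (the real object_detection label_map_util parses it properly via protobuf):

```python
import re

PBTXT = """
item {
  id: 1
  name: 'hualiao'
}
item {
  id: 2
  name: 'liantong'
}
item {
  id: 3
  name: 'Null'
}
item {
  id: 4
  name: 'line'
}
"""

def load_label_map(text):
    # Only handles the simple item { id: N  name: '...' } layout above.
    mapping = {}
    for item in re.findall(r"item\s*\{([^}]*)\}", text):
        ident = re.search(r"id:\s*(\d+)", item)
        name = re.search(r"name:\s*'([^']*)'", item)
        if ident and name:
            mapping[name.group(1)] = int(ident.group(1))
    return mapping

label_map = load_label_map(PBTXT)
print(label_map)   # → {'hualiao': 1, 'liantong': 2, 'Null': 3, 'line': 4}

def class_text_to_int(row_label):
    return label_map.get(row_label)   # None for unknown labels
```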

5.2 Create the directory layout

Under TF_OD/ create the subdirectories data and models, as follows:

/Users/yourusername/PycharmProjects/TF_OD
        +data
          -label_map file
          -train TFRecord file
          -eval TFRecord file
        +models
          + model
            -pipeline config file
            +train
            +eval
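The layout above can be created in one go; run from inside the TF_OD directory (directory names follow the sketch above):

```shell
# From /Users/yourusername/PycharmProjects/TF_OD
mkdir -p data models/model/train models/model/eval
```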

Put the label_map and the train/val record files into the data directory.

Download ssd_mobilenet_v1_coco_11_06_2017.tar.gz
and extract it into /Users/yourusername/PycharmProjects/TF_OD/models/model

Inside the un-tar’ed directory, you will find:

a graph proto (graph.pbtxt)

a checkpoint (model.ckpt.data-00000-of-00001, model.ckpt.index, model.ckpt.meta)

a frozen graph proto with weights baked into the graph as constants (frozen_inference_graph.pb) to be used for out of the box inference (try this out in the Jupyter notebook!)

a config file (pipeline.config) which was used to generate the graph. These directly correspond to a config file in the samples/configs directory but often with a modified score threshold. In the case of the heavier Faster R-CNN models, we also provide a version of the model that uses a highly reduced number of proposals for speed.
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

5.3 Modify the network pipeline config

Download the config file for SSD+MobileNet:
https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v1_coco.config

Save it as

/Users/yourusername/PycharmProjects/TF_OD/models/model/pipeline.config

Then make the following changes (method adapted from https://blog.csdn.net/zong596568821xp/article/details/82015126):

1. Change the number of training classes;
2. Change the number of validation images (as appropriate);
3. Change the train, eval, and label-map paths. (Other config files are modified in much the same places.)

  • num_classes: 4  # some sources say this should be the actual number of classes + 1; this still needs to be confirmed
  • fine_tune_checkpoint: "ssd_mobilenet_v1_coco_11_06_2017/model.ckpt"
  • num_steps: 2000  # number of training steps; set it according to your data volume, the default is 200000
  • train_input_reader/input_path: "data/train.record"
  • train_input_reader/label_map_path: "training/object-detection.pbtxt"
  • num_examples: 56  (number of validation images)
  • num_visualizations: 78  (number of images shown during evaluation; the default is 10)
  • # max_evals: 10  (comment it out)
  • eval_input_reader/input_path: "data/test.record"
  • eval_input_reader/label_map_path: "training/object-detection.pbtxt"

Save and exit when done.

For reference, my config file is as follows:

# SSD with Mobilenet v1 configuration for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.

model {
  ssd {
    num_classes: 4
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.9997,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v1'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
        }
      }
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 0
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}

train_config: {
  batch_size: 24
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "/Users/yourusername/PycharmProjects/TF_OD/models/model/model.ckpt"
  from_detection_checkpoint: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 2000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "/Users/yourusername/PycharmProjects/TF_OD/data/train.record"
  }
  label_map_path: "/Users/yourusername/PycharmProjects/TF_OD/data/object-detection.pbtxt"
}

eval_config: {
  num_examples: 56
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  # max_evals: 10
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "/Users/yourusername/PycharmProjects/TF_OD/data/test.record"
  }
  label_map_path: "/Users/yourusername/PycharmProjects/TF_OD/data/object-detection.pbtxt"
  shuffle: false
  num_readers: 1
}

6 Training

6.1 Before training:

(Tips from https://www.cnblogs.com/zongfa/p/9663649.html)

  1. Add tf.logging.set_verbosity(tf.logging.INFO) right after the import block of model_main.py; the loss will then be printed every hundred steps.
  2. When training under Python 3, wrap category_index.values() (around line 390 of model_lib.py) in list(), i.e. list(category_index.values()); otherwise a "can't pickle dict_values" error occurs.
  3. Because model_main.py merges the old train.py and eval.py, a badly chosen eval count produces warnings such as:

WARNING:tensorflow:Ignoring ground truth with image id 558212937 since it was previously added

If either num_examples in the eval_config (below) or the --num_eval_steps command-line argument is larger than the number of images in your dataset, this warning appears because there are not enough images to evaluate; simply set both values to the number of images.

eval_config: {
  num_examples: 200
  #Note: The below line limits the evaluation process to 10 evaluations.
  #Remove the below line to evaluate indefinitely.
  max_evals: 10
}

6.2 Run the training:

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md

Running the Training Job

A local training job can be run with the following command:

    # From the tensorflow/models/research/ directory
    PIPELINE_CONFIG_PATH={path to pipeline config file}
    MODEL_DIR={path to model directory}
    NUM_TRAIN_STEPS=50000
    SAMPLE_1_OF_N_EVAL_EXAMPLES=1
    python object_detection/model_main.py \
        --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
        --model_dir=${MODEL_DIR} \
        --num_train_steps=${NUM_TRAIN_STEPS} \
        --sample_1_of_n_eval_examples=$SAMPLE_1_OF_N_EVAL_EXAMPLES \
        --alsologtostderr

where ${PIPELINE_CONFIG_PATH} points to the pipeline config and ${MODEL_DIR} points to the directory in which training checkpoints and events will be written to. Note that this binary will interleave both training and evaluation.

Run the following from /Users/yourusername/models-master/research/object_detection:

python model_main.py --pipeline_config_path=/Users/yourusername/PycharmProjects/TF_OD/models/model/pipeline.config --model_dir=/Users/yourusername/PycharmProjects/TF_OD/models/model --num_train_steps=2000 --sample_1_of_n_eval_examples=1 --alsologtostderr

6.3 Export the checkpoint as a .pb graph

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md#exporting-a-trained-model-for-inference

python export_inference_graph.py --input_type=image_tensor --pipeline_config_path=/Users/yourusername/PycharmProjects/TF_OD/model/pipeline.config --trained_checkpoint_prefix=/Users/yourusername/PycharmProjects/TF_OD/model/model.ckpt-10905 --output_directory=/Users/yourusername/PycharmProjects/TF_OD/export_model

7 Errors encountered during training, and fixes

7.1 ImportError: No module named nets

Run the following from /Users/yourusername/models-master/research/slim:

python setup.py build
python setup.py install

If you get error: could not create 'build', the cause is that the cloned repo contains a file named BUILD, while the build and install commands need to create a build directory; the name clash triggers the failure. Move the BUILD file elsewhere and re-run the commands above.

Reference: https://stackoverflow.com/questions/45036496/tensorflow-object-detection-importerror-no-module-named-nets

7.2 TypeError: non_max_suppression() got an unexpected keyword argument 'score_threshold'

Fix:
Upgrade tensorflow to 1.10.0.

7.3 ImportError: No module named 'pycocotools'

A COCO API installation problem; see the COCO API section near the beginning of this article, or run:
pip install pycocotools

7.4 TypeError: name must be string, given: 0

The following two files have a Python 2-to-3 compatibility problem:

  • /Users/yourusername/models-master/research/object_detection/model_lib.py
  • /Applications/anaconda3/envs/TF_OD/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/model_lib.py

In both files, change this section:

 eval_specs.append(
        tf.estimator.EvalSpec(
            name=eval_spec_name,
            input_fn=eval_input_fn,
            steps=None,
            exporters=exporter))

to:

 eval_specs.append(
        tf.estimator.EvalSpec(
            name=str(eval_spec_name),
            input_fn=eval_input_fn,
            steps=None,
            exporters=exporter))

7.5 TypeError: can't pickle dict_values objects

In these two files:

  • /Users/yourusername/models-master/research/object_detection/model_lib.py
  • /Applications/anaconda3/envs/TF_OD/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/model_lib.py

change category_index.values() to list(category_index.values()).

https://yq.aliyun.com/articles/630375
https://github.com/tensorflow/models/issues/4780
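The root cause is easy to reproduce outside TensorFlow: a dict view object is not picklable in Python 3, while a plain list of the same values is (the category_index contents below are invented for illustration):

```python
import pickle

# A miniature category_index like the one object_detection builds.
category_index = {1: {'id': 1, 'name': 'hualiao'}, 2: {'id': 2, 'name': 'line'}}

try:
    pickle.dumps(category_index.values())   # dict_values view: raises TypeError
except TypeError as e:
    print('view failed:', e)

# Wrapping in list() is exactly the model_lib.py fix above.
blob = pickle.dumps(list(category_index.values()))
print(pickle.loads(blob))   # → [{'id': 1, 'name': 'hualiao'}, {'id': 2, 'name': 'line'}]
```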

7.6 'dict' object has no attribute 'itervalues'

In the same two files:

  • /Users/yourusername/models-master/research/object_detection/model_lib.py
  • /Applications/anaconda3/envs/TF_OD/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/model_lib.py

change itervalues() to values().

7.7 matplotlib.use() must be called before pylab

UserWarning: This call to matplotlib.use() has no effect because the backend has already been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time.

Move the line import matplotlib; matplotlib.use('Agg') in /Users/yourusername/models-master/research/object_detection/model_main.py to the very top of model_main.py,

then comment out the corresponding line in /Users/yourusername/models-master/research/object_detection/utils/visualization_utils.py.
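The fix works because the backend can effectively only be chosen once per process: whichever of matplotlib.use() or the first pyplot import runs first wins, hence moving the call to the top. A minimal sketch of the correct ordering:

```python
import matplotlib
matplotlib.use('Agg')             # choose a non-GUI backend first...
import matplotlib.pyplot as plt   # ...then it is safe to import pyplot

print(matplotlib.get_backend())   # reports the Agg backend
```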


Finally, the library and environment versions this article ended up with:

Package                  Version  
------------------------ ---------
absl-py                  0.4.0    
alabaster                0.7.12   
appnope                  0.1.0    
asn1crypto               0.24.0   
astor                    0.7.1    
astroid                  2.0.4    
attrs                    18.1.0   
Automat                  0.7.0    
Babel                    2.6.0    
backcall                 0.1.0    
bleach                   2.1.3    
certifi                  2018.8.13
cffi                     1.11.5   
chardet                  3.0.4    
cloudpickle              0.6.1    
constantly               15.1.0   
contextlib2              0.5.5    
cryptography             2.3.1    
cycler                   0.10.0   
Cython                   0.29.2   
decorator                4.3.0    
docutils                 0.14     
entrypoints              0.2.3    
gast                     0.2.0    
grpcio                   1.12.1   
h5py                     2.8.0    
html5lib                 1.0.1    
hyperlink                18.0.0   
idna                     2.7      
imagesize                1.1.0    
imutils                  0.5.1    
incremental              17.5.0   
ipykernel                4.8.2    
ipython                  6.5.0    
ipython-genutils         0.2.0    
ipywidgets               7.4.2    
isort                    4.3.4    
jedi                     0.12.1   
Jinja2                   2.10     
jsonschema               2.6.0    
jupyter                  1.0.0    
jupyter-client           5.2.3    
jupyter-console          5.2.0    
jupyter-core             4.4.0    
Keras                    2.2.4    
Keras-Applications       1.0.6    
Keras-Preprocessing      1.0.5    
keyring                  17.0.0   
kiwisolver               1.0.1    
labelImg                 1.8.1    
lazy-object-proxy        1.3.1    
lxml                     4.2.5    
Markdown                 2.6.11   
MarkupSafe               1.0      
matplotlib               2.2.3    
mccabe                   0.6.1    
mistune                  0.8.3    
mkl-fft                  1.0.4    
mkl-random               1.0.1    
nbconvert                5.3.1    
nbformat                 4.4.0    
notebook                 5.7.4    
numpy                    1.15.4   
numpydoc                 0.8.0    
object-detection         0.1      
olefile                  0.46     
packaging                18.0     
pandas                   0.23.4   
pandocfilters            1.4.2    
parso                    0.3.1    
pexpect                  4.6.0    
pickleshare              0.7.4    
Pillow                   5.3.0    
pip                      18.1     
prometheus-client        0.3.1    
prompt-toolkit           1.0.15   
protobuf                 3.6.1    
psutil                   5.4.8    
ptyprocess               0.6.0    
pycocotools              2.0      
pycodestyle              2.4.0    
pycparser                2.19     
pyflakes                 2.0.0    
Pygments                 2.2.0    
pylint                   2.1.1    
pyOpenSSL                18.0.0   
pyparsing                2.2.0    
PySimpleGUI              3.14.1   
PySocks                  1.6.8    
python-dateutil          2.7.3    
pytz                     2018.5   
PyYAML                   3.13     
pyzmq                    17.1.2   
QtAwesome                0.5.3    
qtconsole                4.4.3    
QtPy                     1.5.2    
requests                 2.21.0   
rope                     0.11.0   
scipy                    1.1.0    
Send2Trash               1.5.0    
setuptools               40.0.0   
simplegeneric            0.8.1    
six                      1.11.0   
slim                     0.1      
snowballstemmer          1.2.1    
Sphinx                   1.8.2    
sphinxcontrib-websupport 1.1.0    
spyder                   3.3.2    
spyder-kernels           0.2.6    
tensorboard              1.12.1   
tensorflow               1.12.0   
termcolor                1.1.0    
terminado                0.8.1    
testpath                 0.3.1    
tornado                  5.1      
traitlets                4.3.2    
Twisted                  17.5.0   
typed-ast                1.1.0    
urllib3                  1.24.1   
wcwidth                  0.1.7    
webencodings             0.5.1    
Werkzeug                 0.14.1   
wheel                    0.31.1   
widgetsnbextension       3.4.2    
wrapt                    1.10.11  
zope.interface           4.5.0   
