Smart Chicken Coop (yolov5-lite deployed on a Raspberry Pi)


The environment used in this article (yolov5-lite 1.4, ncnn 20210525, numpy 1.21.6) is already set up and published on Docker Hub. If you are familiar with Docker, you can pull the image directly and skip all of the environment-setup steps.

docker pull 233zss/yolov5-lite:v1.4

YOLOv5-Lite

Dataset

Download: https://universe.roboflow.com/object-detection/chicken-jmyni

To collect and label your own data instead, see https://blog.csdn.net/black_sneak/article/details/131374492

Training

Training is done on a remote server inside Docker.

For installing and configuring Docker, see the appendix "Docker: adding your user to the docker group" (footnote 1), then run the following in the server terminal:

docker pull pytorch/pytorch:1.11.0-cuda11.3-cudnn8-devel
docker run -itd --gpus all -v /media/data/zhangshanshan/yolov5-lite/:/code --name yolov5-lite pytorch/pytorch:1.11.0-cuda11.3-cudnn8-devel bash
docker exec -it yolov5-lite bash  # you are root inside the container
apt update
apt install -y libgl1-mesa-glx libglib2.0-0 git wget cmake libopencv-dev protobuf-compiler libprotobuf-dev
wget https://github.com/ppogg/YOLOv5-Lite/releases/download/v1.4/YOLOv5-Lite-1.4.zip
unzip YOLOv5-Lite-1.4.zip
cd YOLOv5-Lite
pip install -r requirements.txt

Next, adjust the training script train.py as needed. For example, if the error below appears, set --workers in train.py to 1 and reduce --batch-size (the underlying cause is Docker's small default /dev/shm, so recreating the container with a larger --shm-size can also help):

RuntimeError: DataLoader worker (pid 23323) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.

My modified version looks like this:

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='weights/v5Lite-s.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='models/v5Lite-s.yaml', help='model.yaml path')
    parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path')
    parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=128, help='total batch size for all GPUs')
    parser.add_argument('--img-size', nargs='+', type=int, default=[416, 416], help='[train, test] image sizes')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--notest', action='store_true', help='only test final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
    parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--device', default='1', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
    parser.add_argument('--workers', type=int, default=1, help='maximum number of dataloader workers')
    parser.add_argument('--project', default='runs/train', help='save to project/name')
    parser.add_argument('--entity', default=None, help='W&B entity')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    parser.add_argument('--linear-lr', action='store_true', help='linear LR')
    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
    parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
    parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
    parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
    parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
    opt = parser.parse_args()

Put the pretrained v5lite-e.pt file into the models folder, and place the dataset folder next to the YOLOv5-Lite folder (same parent directory), as shown below:

.
|-- YOLOv5-Lite
|   |-- LICENSE
|   |-- README.md
|   |-- __pycache__
|   |   `-- test.cpython-38.pyc
|   |-- android_demo
|   |   `-- ncnn-android-v5lite
|   |-- cpp_demo
|   |   |-- mnn
|   |   |-- ncnn
|   |   |-- ort
|   |   |-- tengine
|   |   `-- tensorrt
|   |-- data
|   |   |-- argoverse_hd.yaml
|   |   |-- coco.yaml
|   |   |-- coco128.yaml
|   |   |-- hyp.finetune.yaml
|   |   |-- hyp.scratch.yaml
|   |   |-- person.yaml
|   |   `-- voc.yaml
|   |-- detect.py
|   |-- export.py
|   |-- models
|   |   |-- __init__.py
|   |   |-- __pycache__
|   |   |-- common.py
|   |   |-- experimental.py
|   |   |-- hub
|   |   |-- v5Lite-c.yaml
|   |   |-- v5Lite-e.yaml
|   |   |-- v5Lite-g.yaml
|   |   |-- v5Lite-s.yaml
|   |   |-- v5lite-e.pt
|   |   `-- yolo.py
|   |-- python_demo
|   |   |-- onnxruntime
|   |   |-- openvino
|   |   `-- tensorrt
|   |-- requirements.txt
|   |-- runs
|   |   `-- train
|   |-- scripts
|   |   |-- Grad_Cam.py
|   |   |-- __init__.py
|   |   |-- autoanchor.py
|   |   |-- check.py
|   |   |-- coco2voc.py
|   |   |-- eval.py
|   |   |-- get_argoverse_hd.sh
|   |   |-- get_coco.sh
|   |   |-- get_voc.sh
|   |   |-- main.py
|   |   |-- make_Txt.py
|   |   |-- rep_convert.py
|   |   `-- voc_label.py
|   |-- test.py
|   |-- train.py
|   `-- utils
|       |-- __init__.py
|       |-- __pycache__
|       |-- activations.py
|       |-- autoanchor.py
|       |-- aws
|       |-- datasets.py
|       |-- general.py
|       |-- google_app_engine
|       |-- google_utils.py
|       |-- loss.py
|       |-- metrics.py
|       |-- plots.py
|       |-- torch_utils.py
|       `-- wandb_logging
`-- dataset
    |-- README.dataset.txt
    |-- README.roboflow.txt
    |-- data.yaml
    |-- test
    |   |-- images
    |   `-- labels
    |-- train
    |   |-- images
    |   |-- labels
    |   `-- labels.cache
    `-- valid
        |-- images
        |-- labels
        `-- labels.cache

36 directories, 52 files
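
Before starting training, a quick check that the dataset landed where data.yaml expects it can save a failed run. A small sketch (it assumes the Roboflow export uses .jpg images; adjust the extension if yours differs):

from pathlib import Path

root = Path('/code/dataset')
for split in ('train', 'valid', 'test'):
    imgs = list((root / split / 'images').glob('*.jpg'))
    lbls = list((root / split / 'labels').glob('*.txt'))
    print(f'{split}: {len(imgs)} images, {len(lbls)} label files')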

Start training (adjust the paths to your setup; for testing I only trained for two epochs):

cd YOLOv5-Lite
python train.py --data /code/dataset/data.yaml  --cfg v5Lite-e.yaml --weights models/v5lite-e.pt --batch-size 64 

image

When training finishes you will get something like this:

image

Model conversion and quantization

onnx

Run in the server terminal (adjust the paths to your setup):

pip install numpy==1.21.6
pip install onnx
pip install onnx-simplifier
cd YOLOv5-Lite
export PYTHONPATH="$PWD" && python export.py --weights /code/YOLOv5-Lite/runs/train/exp3/weights/best.pt  --img 416 --batch 1

image

Simplify the model further:

python -m onnxsim /code/YOLOv5-Lite/runs/train/exp3/weights/best.onnx /code/YOLOv5-Lite/runs/train/exp3/weights/e.onnx

image

The onnx model can already be used directly, although in theory it will be slightly slower than ncnn. Let's test it with the official demo code. First, following the format of /code/YOLOv5-Lite/python_demo/onnxruntime/coco.names, write your own .names file (one class name per line; for this project it is a single line, chicken):

image
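
Before editing the demo script, it is worth a quick sanity check that the simplified model loads and has the expected shapes. A minimal sketch (adjust the path to your own run directory):

import onnxruntime as ort

# load the simplified model and print its input/output shapes
sess = ort.InferenceSession('/code/YOLOv5-Lite/runs/train/exp3/weights/e.onnx')
for inp in sess.get_inputs():
    print('input :', inp.name, inp.shape)    # expect something like [1, 3, 416, 416]
for out in sess.get_outputs():
    print('output:', out.name, out.shape)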

Slightly modify ort.py (the original code has a few issues). Adjust the model path, test image path, .names path, and whether to show a display window according to your setup.

import cv2
import time
import numpy as np
import argparse
import onnxruntime as ort

class yolov5_lite():
    def __init__(self, model_pb_path, label_path, confThreshold=0.1, nmsThreshold=0.1, objThreshold=0.2):
        so = ort.SessionOptions()
        so.log_severity_level = 3
        self.net = ort.InferenceSession(model_pb_path, so)
        self.classes = list(map(lambda x: x.strip(), open(label_path, 'r').readlines()))
        self.num_classes = len(self.classes)
        anchors = [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]]
        self.nl = len(anchors)
        self.na = len(anchors[0]) // 2
        self.no = self.num_classes + 5
        self.grid = [np.zeros(1)] * self.nl
        self.stride = np.array([8., 16., 32.])
        self.anchor_grid = np.asarray(anchors, dtype=np.float32).reshape(self.nl, -1, 2)

        self.confThreshold = confThreshold
        self.nmsThreshold = nmsThreshold
        self.objThreshold = objThreshold
        self.input_shape = (self.net.get_inputs()[0].shape[2], self.net.get_inputs()[0].shape[3])

    def resize_image(self, srcimg, keep_ratio=True):
        top, left, newh, neww = 0, 0, self.input_shape[0], self.input_shape[1]
        if keep_ratio and srcimg.shape[0] != srcimg.shape[1]:
            hw_scale = srcimg.shape[0] / srcimg.shape[1]
            if hw_scale > 1:
                newh, neww = self.input_shape[0], int(self.input_shape[1] / hw_scale)
                img = cv2.resize(srcimg, (neww, newh), interpolation=cv2.INTER_AREA)
                left = int((self.input_shape[1] - neww) * 0.5)
                img = cv2.copyMakeBorder(img, 0, 0, left, self.input_shape[1] - neww - left, cv2.BORDER_CONSTANT,
                                         value=0)  # add border
            else:
                newh, neww = int(self.input_shape[0] * hw_scale), self.input_shape[1]
                img = cv2.resize(srcimg, (neww, newh), interpolation=cv2.INTER_AREA)
                top = int((self.input_shape[0] - newh) * 0.5)
                img = cv2.copyMakeBorder(img, top, self.input_shape[0] - newh - top, 0, 0, cv2.BORDER_CONSTANT, value=0)
        else:
            img = cv2.resize(srcimg, self.input_shape, interpolation=cv2.INTER_AREA)
        return img, newh, neww, top, left

    def _make_grid(self, nx=20, ny=20):
        xv, yv = np.meshgrid(np.arange(ny), np.arange(nx))
        return np.stack((xv, yv), 2).reshape((-1, 2)).astype(np.float32)

    def postprocess(self, frame, outs, pad_hw):
        newh, neww, padh, padw = pad_hw
        frameHeight = frame.shape[0]
        frameWidth = frame.shape[1]
        ratioh, ratiow = frameHeight / newh, frameWidth / neww
        # Scan through all the bounding boxes output from the network and keep only the
        # ones with high confidence scores. Assign the box's class label as the class with the highest score.
        classIds = []
        confidences = []
        box_index = []
        boxes = []
        for detection in outs:
            scores = detection[5:]
            classId = np.argmax(scores)
            confidence = scores[classId]
            if confidence > self.confThreshold and detection[4] > self.objThreshold:
                center_x = int((detection[0] - padw) * ratiow)
                center_y = int((detection[1] - padh) * ratioh)
                width = int(detection[2] * ratiow)
                height = int(detection[3] * ratioh)
                left = int(center_x - width / 2)
                top = int(center_y - height / 2)
                classIds.append(classId)
                confidences.append(float(confidence))
                boxes.append([left, top, width, height])

        # Perform non maximum suppression to eliminate redundant overlapping boxes with
        # lower confidences.
        print(boxes)
        indices = cv2.dnn.NMSBoxes(boxes, confidences, self.confThreshold, self.nmsThreshold)
        print(indices)
      
        for i in indices:
            box_index.append(i)

        for i in box_index:
            box = boxes[i]
            left = box[0]
            top = box[1]
            width = box[2]
            height = box[3]
            print(classIds[i], confidences[i])
            frame = self.drawPred(frame, classIds[i], confidences[i], left, top, left + width, top + height)
        return frame

    def drawPred(self, frame, classId, conf, left, top, right, bottom):
        # Draw a bounding box.
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), thickness=2)

        label = '%.2f' % conf
        label = '%s:%s' % (self.classes[classId], label)

        # Display the label at the top of the bounding box
        labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1)
        top = max(top, labelSize[1])
        # cv.rectangle(frame, (left, top - round(1.5 * labelSize[1])), (left + round(1.5 * labelSize[0]), top + baseLine), (255,255,255), cv.FILLED)
        cv2.putText(frame, label, (left, top - 10), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 255, 0), thickness=1)
        return frame

    def detect(self, srcimg):
        img, newh, neww, top, left = self.resize_image(srcimg)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img.astype(np.float32) / 255.0
        blob = np.expand_dims(np.transpose(img, (2, 0, 1)), axis=0)

        t1 = time.time()
        outs = self.net.run(None, {self.net.get_inputs()[0].name: blob})[0].squeeze(axis=0)
        cost_time = time.time() - t1
        print(outs.shape)
        row_ind = 0
        for i in range(self.nl):
            h, w = int(self.input_shape[0] / self.stride[i]), int(self.input_shape[1] / self.stride[i])
            length = int(self.na * h * w)
            if self.grid[i].shape[2:4] != (h, w):
                self.grid[i] = self._make_grid(w, h)

            outs[row_ind:row_ind + length, 0:2] = (outs[row_ind:row_ind + length, 0:2] * 2. - 0.5 + np.tile(
                self.grid[i], (self.na, 1))) * int(self.stride[i])
            outs[row_ind:row_ind + length, 2:4] = (outs[row_ind:row_ind + length, 2:4] * 2) ** 2 * np.repeat(
                self.anchor_grid[i], h * w, axis=0)
            row_ind += length
        srcimg = self.postprocess(srcimg, outs, (newh, neww, top, left))
        infer_time = 'Inference Time: ' + str(int(cost_time * 1000)) + 'ms'
        cv2.putText(srcimg, infer_time, (5, 20), cv2.FONT_HERSHEY_TRIPLEX, 0.5, (0, 0, 0), thickness=1)
        return srcimg


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--imgpath', type=str, default='/code/dataset/test/images/1605236197942_jpg.rf.6bbdb01c9cd93bb481528cb9c58cb308.jpg', help="image path")
    parser.add_argument('--modelpath', type=str, default='/code/YOLOv5-Lite/runs/train/exp3/weights/e.onnx', help="onnx filepath")
    parser.add_argument('--classfile', type=str, default='/code/YOLOv5-Lite/python_demo/onnxruntime/chicken.names', help="classname filepath")
    parser.add_argument('--confThreshold', default=0.1, type=float, help='class confidence')
    parser.add_argument('--nmsThreshold', default=0.1, type=float, help='nms iou thresh')
    args = parser.parse_args()

    srcimg = cv2.imread(args.imgpath)
    net = yolov5_lite(args.modelpath, args.classfile, confThreshold=args.confThreshold, nmsThreshold=args.nmsThreshold)
    srcimg = net.detect(srcimg.copy())
    cv2.imwrite("/code/YOLOv5-Lite/python_demo/onnxruntime/result.jpg", srcimg)


    # winName = 'Deep learning object detection in onnxruntime'
    # cv2.namedWindow(winName, cv2.WINDOW_NORMAL)
    # cv2.imshow(winName, srcimg)
    # cv2.waitKey(0)
    # # cv2.imwrite('save.jpg', srcimg )
    # cv2.destroyAllWindows()

Then run:

cd YOLOv5-Lite/python_demo/onnxruntime/
python3 ort.py

Test result (if nothing is detected, try lowering the thresholds):

image

For camera-based detection, just change the main section of ort.py:

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--modelpath', type=str, default='/code/YOLOv5-Lite/runs/train/exp3/weights/e.onnx', help="onnx filepath")
    parser.add_argument('--classfile', type=str, default='/code/YOLOv5-Lite/python_demo/onnxruntime/chicken.names', help="classname filepath")
    parser.add_argument('--confThreshold', default=0.1, type=float, help='class confidence')
    parser.add_argument('--nmsThreshold', default=0.1, type=float, help='nms iou thresh')
    args = parser.parse_args()

    cap = cv2.VideoCapture(0)  # 0 selects the default camera
    net = yolov5_lite(args.modelpath, args.classfile, confThreshold=args.confThreshold, nmsThreshold=args.nmsThreshold)

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
      
        frame = net.detect(frame)

        cv2.imshow('YOLOv5 Lite Object Detection', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

ncnn

There are two approaches: convert the onnx model to ncnn, or convert with pnnx (recommended by Tencent, which is said to optimize better).

Converting from onnx

Use the 20210525 release of ncnn. In the server terminal run:

wget https://github.com/Tencent/ncnn/archive/refs/tags/20210525.zip -O ncnn-20210525.zip
unzip ncnn-20210525.zip
cd ncnn-20210525
mkdir build
cd build
cmake ..
make -j8
make install
./tools/onnx/onnx2ncnn /code/YOLOv5-Lite/runs/train/exp3/weights/e.onnx /code/YOLOv5-Lite/runs/train/exp3/weights/e.param /code/YOLOv5-Lite/runs/train/exp3/weights/e.bin
# optimize the model and convert it to fp16
./tools/ncnnoptimize /code/YOLOv5-Lite/runs/train/exp3/weights/e.param /code/YOLOv5-Lite/runs/train/exp3/weights/e.bin /code/YOLOv5-Lite/runs/train/exp3/weights/e2.param /code/YOLOv5-Lite/runs/train/exp3/weights/e2.bin 65536

image

Open e2.param and change the Reshape layer above each Permute so that its first argument reads 0=-1; this step enables dynamic input sizes (see the screenshot below).
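
If you would rather not edit the file by hand, the change can be scripted. A small sketch that assumes the usual ncnn .param text layout (one layer per line, key=value arguments at the end) and rewrites the first argument of every Reshape line:

import re

def set_reshape_dynamic(param_path):
    # rewrite every Reshape layer's "0=<n>" argument to "0=-1"
    with open(param_path) as f:
        lines = f.readlines()
    with open(param_path, 'w') as f:
        for line in lines:
            if line.startswith('Reshape'):
                line = re.sub(r'(?<!\d)0=-?\d+', '0=-1', line, count=1)
            f.write(line)

set_reshape_dynamic('/code/YOLOv5-Lite/runs/train/exp3/weights/e2.param')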

image

ncnn ships an example at ncnn-20210525/examples/yolov5.cpp. Copy the anchors from /code/YOLOv5-Lite/models/v5Lite-e.yaml into the corresponding places in ncnn-20210525/examples/yolov5.cpp.

(Since yolov5-lite is derived from yolov5, inference works the same way, so the official ncnn example can be used directly, which also saves you from writing a CMakeLists.txt.)

image

Copy the Permute output blob names from e2.param into the corresponding ex.extract() calls in ncnn-20210525/examples/yolov5.cpp (see the screenshot below).
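
The exact blob names differ from export to export, so you can also list them straight from the param file. A rough sketch, again assuming the standard .param line layout (type, name, input count, output count, input blobs, output blobs, arguments); the printed names are the ones to use in the ex.extract() calls:

def list_permute_outputs(param_path):
    # print the output blob name of every Permute layer
    with open(param_path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == 'Permute':
                n_in, n_out = int(fields[2]), int(fields[3])
                outputs = fields[4 + n_in : 4 + n_in + n_out]
                print(fields[1], '->', ' '.join(outputs))

list_permute_outputs('/code/YOLOv5-Lite/runs/train/exp3/weights/e2.param')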

image

Change the param/bin file paths, thresholds, and similar settings in ncnn-20210525/examples/yolov5.cpp to match your setup (I added comments at those places in the code below).

image

Since I am working on a server without a display, I commented out the window-display code; adjust it to your needs.

image

Here is my modified file:

// Tencent is pleased to support the open source community by making ncnn available.
//
// Copyright (C) 2020 THL A29 Limited, a Tencent company. All rights reserved.
//
// Licensed under the BSD 3-Clause License (the "License"); you may not use this file except
// in compliance with the License. You may obtain a copy of the License at
//
// https://opensource.org/licenses/BSD-3-Clause
//
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.

#include "layer.h"
#include "net.h"

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <stdio.h>
#include <vector>

class YoloV5Focus : public ncnn::Layer
{
public:
    YoloV5Focus()
    {
        one_blob_only = true;
    }

    virtual int forward(const ncnn::Mat& bottom_blob, ncnn::Mat& top_blob, const ncnn::Option& opt) const
    {
        int w = bottom_blob.w;
        int h = bottom_blob.h;
        int channels = bottom_blob.c;

        int outw = w / 2;
        int outh = h / 2;
        int outc = channels * 4;

        top_blob.create(outw, outh, outc, 4u, 1, opt.blob_allocator);
        if (top_blob.empty())
            return -100;

        #pragma omp parallel for num_threads(opt.num_threads)
        for (int p = 0; p < outc; p++)
        {
            const float* ptr = bottom_blob.channel(p % channels).row((p / channels) % 2) + ((p / channels) / 2);
            float* outptr = top_blob.channel(p);

            for (int i = 0; i < outh; i++)
            {
                for (int j = 0; j < outw; j++)
                {
                    *outptr = *ptr;

                    outptr += 1;
                    ptr += 2;
                }

                ptr += w;
            }
        }

        return 0;
    }
};

DEFINE_LAYER_CREATOR(YoloV5Focus)

struct Object
{
    cv::Rect_<float> rect;
    int label;
    float prob;
};

static inline float intersection_area(const Object& a, const Object& b)
{
    cv::Rect_<float> inter = a.rect & b.rect;
    return inter.area();
}

static void qsort_descent_inplace(std::vector<Object>& faceobjects, int left, int right)
{
    int i = left;
    int j = right;
    float p = faceobjects[(left + right) / 2].prob;

    while (i <= j)
    {
        while (faceobjects[i].prob > p)
            i++;

        while (faceobjects[j].prob < p)
            j--;

        if (i <= j)
        {
            // swap
            std::swap(faceobjects[i], faceobjects[j]);

            i++;
            j--;
        }
    }

    #pragma omp parallel sections
    {
        #pragma omp section
        {
            if (left < j) qsort_descent_inplace(faceobjects, left, j);
        }
        #pragma omp section
        {
            if (i < right) qsort_descent_inplace(faceobjects, i, right);
        }
    }
}

static void qsort_descent_inplace(std::vector<Object>& faceobjects)
{
    if (faceobjects.empty())
        return;

    qsort_descent_inplace(faceobjects, 0, faceobjects.size() - 1);
}

static void nms_sorted_bboxes(const std::vector<Object>& faceobjects, std::vector<int>& picked, float nms_threshold)
{
    picked.clear();

    const int n = faceobjects.size();

    std::vector<float> areas(n);
    for (int i = 0; i < n; i++)
    {
        areas[i] = faceobjects[i].rect.area();
    }

    for (int i = 0; i < n; i++)
    {
        const Object& a = faceobjects[i];

        int keep = 1;
        for (int j = 0; j < (int)picked.size(); j++)
        {
            const Object& b = faceobjects[picked[j]];

            // intersection over union
            float inter_area = intersection_area(a, b);
            float union_area = areas[i] + areas[picked[j]] - inter_area;
            // float IoU = inter_area / union_area
            if (inter_area / union_area > nms_threshold)
                keep = 0;
        }

        if (keep)
            picked.push_back(i);
    }
}

static inline float sigmoid(float x)
{
    return static_cast<float>(1.f / (1.f + exp(-x)));
}

static void generate_proposals(const ncnn::Mat& anchors, int stride, const ncnn::Mat& in_pad, const ncnn::Mat& feat_blob, float prob_threshold, std::vector<Object>& objects)
{
    const int num_grid = feat_blob.h;

    int num_grid_x;
    int num_grid_y;
    if (in_pad.w > in_pad.h)
    {
        num_grid_x = in_pad.w / stride;
        num_grid_y = num_grid / num_grid_x;
    }
    else
    {
        num_grid_y = in_pad.h / stride;
        num_grid_x = num_grid / num_grid_y;
    }

    const int num_class = feat_blob.w - 5;

    const int num_anchors = anchors.w / 2;

    for (int q = 0; q < num_anchors; q++)
    {
        const float anchor_w = anchors[q * 2];
        const float anchor_h = anchors[q * 2 + 1];

        const ncnn::Mat feat = feat_blob.channel(q);

        for (int i = 0; i < num_grid_y; i++)
        {
            for (int j = 0; j < num_grid_x; j++)
            {
                const float* featptr = feat.row(i * num_grid_x + j);

                // find class index with max class score
                int class_index = 0;
                float class_score = -FLT_MAX;
                for (int k = 0; k < num_class; k++)
                {
                    float score = featptr[5 + k];
                    if (score > class_score)
                    {
                        class_index = k;
                        class_score = score;
                    }
                }

                float box_score = featptr[4];

                float confidence = sigmoid(box_score) * sigmoid(class_score);

                if (confidence >= prob_threshold)
                {
                    // yolov5/models/yolo.py Detect forward
                    // y = x[i].sigmoid()
                    // y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy
                    // y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh

                    float dx = sigmoid(featptr[0]);
                    float dy = sigmoid(featptr[1]);
                    float dw = sigmoid(featptr[2]);
                    float dh = sigmoid(featptr[3]);

                    float pb_cx = (dx * 2.f - 0.5f + j) * stride;
                    float pb_cy = (dy * 2.f - 0.5f + i) * stride;

                    float pb_w = pow(dw * 2.f, 2) * anchor_w;
                    float pb_h = pow(dh * 2.f, 2) * anchor_h;

                    float x0 = pb_cx - pb_w * 0.5f;
                    float y0 = pb_cy - pb_h * 0.5f;
                    float x1 = pb_cx + pb_w * 0.5f;
                    float y1 = pb_cy + pb_h * 0.5f;

                    Object obj;
                    obj.rect.x = x0;
                    obj.rect.y = y0;
                    obj.rect.width = x1 - x0;
                    obj.rect.height = y1 - y0;
                    obj.label = class_index;
                    obj.prob = confidence;

                    objects.push_back(obj);
                }
            }
        }
    }
}

static int detect_yolov5(const cv::Mat& bgr, std::vector<Object>& objects)
{
    ncnn::Net yolov5;

    yolov5.opt.use_vulkan_compute = true;
    // yolov5.opt.use_bf16_storage = true;

    yolov5.register_custom_layer("YoloV5Focus", YoloV5Focus_layer_creator);

    // original pretrained model from https://github.com/ultralytics/yolov5
    // the ncnn model https://github.com/nihui/ncnn-assets/tree/master/models
    yolov5.load_param("/code/YOLOv5-Lite/runs/train/exp5/weights/e2.param");// path to the .param file
    yolov5.load_model("/code/YOLOv5-Lite/runs/train/exp5/weights/e2.bin");// path to the .bin file

    const int target_size = 320;
    const float prob_threshold = 0.5f;// confidence threshold
    const float nms_threshold = 0.5f;

    int img_w = bgr.cols;
    int img_h = bgr.rows;

    // letterbox pad to multiple of 32
    int w = img_w;
    int h = img_h;
    float scale = 1.f;
    if (w > h)
    {
        scale = (float)target_size / w;
        w = target_size;
        h = h * scale;
    }
    else
    {
        scale = (float)target_size / h;
        h = target_size;
        w = w * scale;
    }

    ncnn::Mat in = ncnn::Mat::from_pixels_resize(bgr.data, ncnn::Mat::PIXEL_BGR2RGB, img_w, img_h, w, h);

    // pad to target_size rectangle
    // yolov5/utils/datasets.py letterbox
    int wpad = (w + 31) / 32 * 32 - w;
    int hpad = (h + 31) / 32 * 32 - h;
    ncnn::Mat in_pad;
    ncnn::copy_make_border(in, in_pad, hpad / 2, hpad - hpad / 2, wpad / 2, wpad - wpad / 2, ncnn::BORDER_CONSTANT, 114.f);

    const float norm_vals[3] = {1 / 255.f, 1 / 255.f, 1 / 255.f};
    in_pad.substract_mean_normalize(0, norm_vals);

    ncnn::Extractor ex = yolov5.create_extractor();

    ex.input("images", in_pad);

    std::vector<Object> proposals;

    // anchor setting from yolov5/models/yolov5s.yaml

    // stride 8
    {
        ncnn::Mat out;
        ex.extract("onnx::Sigmoid_577", out);

        ncnn::Mat anchors(6);
        anchors[0] = 10.f;
        anchors[1] = 13.f;
        anchors[2] = 16.f;
        anchors[3] = 30.f;
        anchors[4] = 33.f;
        anchors[5] = 23.f;

        std::vector<Object> objects8;
        generate_proposals(anchors, 8, in_pad, out, prob_threshold, objects8);

        proposals.insert(proposals.end(), objects8.begin(), objects8.end());
    }

    // stride 16
    {
        ncnn::Mat out;
        ex.extract("onnx::Sigmoid_599", out);

        ncnn::Mat anchors(6);
        anchors[0] = 30.f;
        anchors[1] = 61.f;
        anchors[2] = 62.f;
        anchors[3] = 45.f;
        anchors[4] = 59.f;
        anchors[5] = 119.f;

        std::vector<Object> objects16;
        generate_proposals(anchors, 16, in_pad, out, prob_threshold, objects16);

        proposals.insert(proposals.end(), objects16.begin(), objects16.end());
    }

    // stride 32
    {
        ncnn::Mat out;
        ex.extract("onnx::Sigmoid_621", out);

        ncnn::Mat anchors(6);
        anchors[0] = 116.f;
        anchors[1] = 90.f;
        anchors[2] = 156.f;
        anchors[3] = 198.f;
        anchors[4] = 373.f;
        anchors[5] = 326.f;

        std::vector<Object> objects32;
        generate_proposals(anchors, 32, in_pad, out, prob_threshold, objects32);

        proposals.insert(proposals.end(), objects32.begin(), objects32.end());
    }

    // sort all proposals by score from highest to lowest
    qsort_descent_inplace(proposals);

    // apply nms with nms_threshold
    std::vector<int> picked;
    nms_sorted_bboxes(proposals, picked, nms_threshold);

    int count = picked.size();

    objects.resize(count);
    for (int i = 0; i < count; i++)
    {
        objects[i] = proposals[picked[i]];

        // adjust offset to original unpadded
        float x0 = (objects[i].rect.x - (wpad / 2)) / scale;
        float y0 = (objects[i].rect.y - (hpad / 2)) / scale;
        float x1 = (objects[i].rect.x + objects[i].rect.width - (wpad / 2)) / scale;
        float y1 = (objects[i].rect.y + objects[i].rect.height - (hpad / 2)) / scale;

        // clip
        x0 = std::max(std::min(x0, (float)(img_w - 1)), 0.f);
        y0 = std::max(std::min(y0, (float)(img_h - 1)), 0.f);
        x1 = std::max(std::min(x1, (float)(img_w - 1)), 0.f);
        y1 = std::max(std::min(y1, (float)(img_h - 1)), 0.f);

        objects[i].rect.x = x0;
        objects[i].rect.y = y0;
        objects[i].rect.width = x1 - x0;
        objects[i].rect.height = y1 - y0;
    }

    return 0;
}

static void draw_objects(const cv::Mat& bgr, const std::vector<Object>& objects)
{
    static const char* class_names[] = {
        "chicken"
    };

    cv::Mat image = bgr.clone();

    for (size_t i = 0; i < objects.size(); i++)
    {
        const Object& obj = objects[i];

        fprintf(stderr, "%d = %.5f at %.2f %.2f %.2f x %.2f\n", obj.label, obj.prob,
                obj.rect.x, obj.rect.y, obj.rect.width, obj.rect.height);

        cv::rectangle(image, obj.rect, cv::Scalar(255, 0, 0));

        char text[256];
        sprintf(text, "%s %.1f%%", class_names[obj.label], obj.prob * 100);

        int baseLine = 0;
        cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);

        int x = obj.rect.x;
        int y = obj.rect.y - label_size.height - baseLine;
        if (y < 0)
            y = 0;
        if (x + label_size.width > image.cols)
            x = image.cols - label_size.width;

        cv::rectangle(image, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)),
                      cv::Scalar(255, 255, 255), -1);

        cv::putText(image, text, cv::Point(x, y + label_size.height),
                    cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0));
    }

    // cv::imshow("image", image);
    cv::imwrite("/code/YOLOv5-Lite/cpp_demo/ncnn/result_ncnn.jpg", image);
    // cv::waitKey(0);
}

int main(int argc, char** argv)
{
    if (argc != 2)
    {
        fprintf(stderr, "Usage: %s [imagepath]\n", argv[0]);
        return -1;
    }

    const char* imagepath = argv[1];

    cv::Mat m = cv::imread(imagepath, 1);
    if (m.empty())
    {
        fprintf(stderr, "cv::imread %s failed\n", imagepath);
        return -1;
    }

    std::vector<Object> objects;
    detect_yolov5(m, objects);

    draw_objects(m, objects);

    return 0;
}

Finally, rebuild ncnn and run in the terminal:

cd ncnn-20210525/build
rm -rf *
cmake ..
make -j20
make install 
cd examples
./yolov5 /code/dataset/test/images/1596210416143_jpg.rf.f0b02ec559c973cac67a0ca3de8c8bfc.jpg

image

Raspberry Pi deployment

Raspberry Pi basics

  • Follow the Waveshare tutorial to flash the Raspberry Pi image to an SD card and enable SSH and WiFi at the same time (the tutorial also covers the relevant basics; if you have never used a Raspberry Pi, go through it first to pick up the essentials).

    Note: remember to enable SSH and configure WiFi as the tutorial describes; both are needed later.

  • Connect to the Raspberry Pi (any of the listed options will do; WiFi was configured in the previous step, so the Pi joins the network automatically after booting).

  • Then update the package sources following the tutorial (check your Raspberry Pi OS version first and pick the matching mirror).

    Reference: 树莓派替换镜像源(终极版!)

Deploying yolov5-lite

Run in the terminal:

cd
# replace python3.11 below with your actual Python version; check under /usr/lib/
sudo mv /usr/lib/python3.11/EXTERNALLY-MANAGED /usr/lib/python3.11/EXTERNALLY-MANAGED.bk
sudo apt-get install -y python3-opencv --fix-missing
pip install onnxruntime
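
A quick way to confirm both packages installed correctly on the Pi (a one-off check, nothing project-specific):

import cv2
import onnxruntime as ort

print('opencv     :', cv2.__version__)
print('onnxruntime:', ort.__version__, ort.get_available_providers())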

Then run the following code:

import cv2
import time
import numpy as np
import argparse
import onnxruntime as ort

class yolov5_lite():
    def __init__(self, model_pb_path, label_path, confThreshold=0.2, nmsThreshold=0.2, objThreshold=0.5):
        so = ort.SessionOptions()
        so.log_severity_level = 3
        self.net = ort.InferenceSession(model_pb_path, so)
        self.classes = list(map(lambda x: x.strip(), open(label_path, 'r').readlines()))
        self.num_classes = len(self.classes)
        anchors = [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]]
        self.nl = len(anchors)
        self.na = len(anchors[0]) // 2
        self.no = self.num_classes + 5
        self.grid = [np.zeros(1)] * self.nl
        self.stride = np.array([8., 16., 32.])
        self.anchor_grid = np.asarray(anchors, dtype=np.float32).reshape(self.nl, -1, 2)

        self.confThreshold = confThreshold
        self.nmsThreshold = nmsThreshold
        self.objThreshold = objThreshold
        self.input_shape = (self.net.get_inputs()[0].shape[2], self.net.get_inputs()[0].shape[3])

    def resize_image(self, srcimg, keep_ratio=True):
        top, left, newh, neww = 0, 0, self.input_shape[0], self.input_shape[1]
        if keep_ratio and srcimg.shape[0] != srcimg.shape[1]:
            hw_scale = srcimg.shape[0] / srcimg.shape[1]
            if hw_scale > 1:
                newh, neww = self.input_shape[0], int(self.input_shape[1] / hw_scale)
                img = cv2.resize(srcimg, (neww, newh), interpolation=cv2.INTER_AREA)
                left = int((self.input_shape[1] - neww) * 0.5)
                img = cv2.copyMakeBorder(img, 0, 0, left, self.input_shape[1] - neww - left, cv2.BORDER_CONSTANT,
                                         value=0)  # add border
            else:
                newh, neww = int(self.input_shape[0] * hw_scale), self.input_shape[1]
                img = cv2.resize(srcimg, (neww, newh), interpolation=cv2.INTER_AREA)
                top = int((self.input_shape[0] - newh) * 0.5)
                img = cv2.copyMakeBorder(img, top, self.input_shape[0] - newh - top, 0, 0, cv2.BORDER_CONSTANT, value=0)
        else:
            img = cv2.resize(srcimg, self.input_shape, interpolation=cv2.INTER_AREA)
        return img, newh, neww, top, left

    def _make_grid(self, nx=20, ny=20):
        xv, yv = np.meshgrid(np.arange(ny), np.arange(nx))
        return np.stack((xv, yv), 2).reshape((-1, 2)).astype(np.float32)

    def postprocess(self, frame, outs, pad_hw):
        newh, neww, padh, padw = pad_hw
        frameHeight = frame.shape[0]
        frameWidth = frame.shape[1]
        ratioh, ratiow = frameHeight / newh, frameWidth / neww
        # Scan through all the bounding boxes output from the network and keep only the
        # ones with high confidence scores. Assign the box's class label as the class with the highest score.
        classIds = []
        confidences = []
        box_index = []
        boxes = []
        for detection in outs:
            scores = detection[5:]
            classId = np.argmax(scores)
            confidence = scores[classId]
            if confidence > self.confThreshold and detection[4] > self.objThreshold:
                center_x = int((detection[0] - padw) * ratiow)
                center_y = int((detection[1] - padh) * ratioh)
                width = int(detection[2] * ratiow)
                height = int(detection[3] * ratioh)
                left = int(center_x - width / 2)
                top = int(center_y - height / 2)
                classIds.append(classId)
                confidences.append(float(confidence))
                boxes.append([left, top, width, height])

        # Perform non maximum suppression to eliminate redundant overlapping boxes with
        # lower confidences.
        # print(boxes)
        indices = cv2.dnn.NMSBoxes(boxes, confidences, self.confThreshold, self.nmsThreshold)
        # print(indices)
      
        for i in indices:
            box_index.append(i)

        for i in box_index:
            box = boxes[i]
            left = box[0]
            top = box[1]
            width = box[2]
            height = box[3]
            # print(classIds[i], confidences[i])
            frame = self.drawPred(frame, classIds[i], confidences[i], left, top, left + width, top + height)
        return frame

    def drawPred(self, frame, classId, conf, left, top, right, bottom):
        # Draw a bounding box.
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), thickness=2)

        label = '%.2f' % conf
        label = '%s:%s' % (self.classes[classId], label)

        # Display the label at the top of the bounding box
        labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1)
        top = max(top, labelSize[1])
        # cv.rectangle(frame, (left, top - round(1.5 * labelSize[1])), (left + round(1.5 * labelSize[0]), top + baseLine), (255,255,255), cv.FILLED)
        cv2.putText(frame, label, (left, top - 10), cv2.FONT_HERSHEY_TRIPLEX, 4, (0, 255, 0), thickness=2)
        return frame

    def detect(self, srcimg):
        img, newh, neww, top, left = self.resize_image(srcimg)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img.astype(np.float32) / 255.0
        blob = np.expand_dims(np.transpose(img, (2, 0, 1)), axis=0)

        t1 = time.time()
        outs = self.net.run(None, {self.net.get_inputs()[0].name: blob})[0].squeeze(axis=0)
        cost_time = time.time() - t1
        # print(outs.shape)
        row_ind = 0
        for i in range(self.nl):
            h, w = int(self.input_shape[0] / self.stride[i]), int(self.input_shape[1] / self.stride[i])
            length = int(self.na * h * w)
            if self.grid[i].shape[2:4] != (h, w):
                self.grid[i] = self._make_grid(w, h)

            outs[row_ind:row_ind + length, 0:2] = (outs[row_ind:row_ind + length, 0:2] * 2. - 0.5 + np.tile(
                self.grid[i], (self.na, 1))) * int(self.stride[i])
            outs[row_ind:row_ind + length, 2:4] = (outs[row_ind:row_ind + length, 2:4] * 2) ** 2 * np.repeat(
                self.anchor_grid[i], h * w, axis=0)
            row_ind += length
        srcimg = self.postprocess(srcimg, outs, (newh, neww, top, left))
        infer_time = 'Inference Time: ' + str(int(cost_time * 1000)) + 'ms'
        cv2.putText(srcimg, infer_time, (50, 50), cv2.FONT_HERSHEY_TRIPLEX, 2, (0, 0, 0), thickness=1)
        return srcimg


# if __name__ == '__main__':
#     parser = argparse.ArgumentParser()
#     parser.add_argument('--imgpath', type=str, default='/home/zs/Desktop/智能鸡舍/onnxruntime/R-C.jpg', help="image path")
#     parser.add_argument('--modelpath', type=str, default='/home/zs/Desktop/智能鸡舍/weights/e.onnx', help="onnx filepath")
#     parser.add_argument('--classfile', type=str, default='/home/zs/Desktop/智能鸡舍/onnxruntime/chicken.names', help="classname filepath")
#     parser.add_argument('--confThreshold', default=0.2, type=float, help='class confidence')
#     parser.add_argument('--nmsThreshold', default=0.2, type=float, help='nms iou thresh')
#     args = parser.parse_args()

#     srcimg = cv2.imread(args.imgpath)
#     net = yolov5_lite(args.modelpath, args.classfile, confThreshold=args.confThreshold, nmsThreshold=args.nmsThreshold)
#     srcimg = net.detect(srcimg.copy())
#     cv2.imwrite("/home/zs/Desktop/智能鸡舍/onnxruntime/result.jpg", srcimg)


    # winName = 'Deep learning object detection in onnxruntime'
    # cv2.namedWindow(winName, cv2.WINDOW_NORMAL)
    # cv2.imshow(winName, srcimg)
    # cv2.waitKey(0)
    # # cv2.imwrite('save.jpg', srcimg )
    # cv2.destroyAllWindows()

image

​​

Reading sensor data

dht11

DHT11   Raspberry Pi
GND     GND
5V      5V
S       GPIO7

Run in the terminal:

cd
pip install pyserial
git clone https://gitee.com/outlaw-maniac-zhang-san-1/adafruit_-python_-dht_-with_-pi4.git  # private repo, now closed
cd adafruit_-python_-dht_-with_-pi4/
sudo python3 setup.py install
python3
import Adafruit_DHT
humidity, temperature = Adafruit_DHT.read_retry(11, 4)
humidity,temperature
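
For continuous readings, the one-off calls above can be wrapped in a small polling loop. A sketch (11 selects the DHT11 sensor type and 4 is the BCM GPIO number, i.e. physical pin 7 as wired above; read_retry returns None values when a read fails):

import time
import Adafruit_DHT

SENSOR = 11   # DHT11
PIN = 4       # BCM GPIO4 (physical pin 7)

while True:
    humidity, temperature = Adafruit_DHT.read_retry(SENSOR, PIN)
    if humidity is None or temperature is None:
        print('read failed, retrying...')
    else:
        print('temperature=%.1fC humidity=%.1f%%' % (temperature, humidity))
    time.sleep(2)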


TVOC/CO2 gas sensor

TVOC/CO2 gas sensor   Raspberry Pi
GND                   GND
5V                    5V
A                     TX
B                     RX

In the terminal, run sudo raspi-config to open the configuration menu.

Interfacing Options → Serial → No (to the login shell over serial question) → Yes (to enabling the serial port hardware)

image

In the terminal, run ls -al /dev/ to check the devices:

image

The code is below (perhaps the module itself is faulty; the readings stay at 0):

import serial

def parse_uart_data(data):
    if len(data) < 9:
        print("Invalid data length")
        return None, None, None
  
    # parse the module address
    module_address = (data[0] << 8) + data[1]
  
    # compute the checksum over the first 8 bytes
    checksum = sum(data[:-1]) & 0xFF

    # verify the checksum
    if checksum != data[8]:
        print("Checksum mismatch")
        return None, None, None

    # compute the TVOC, CH2O and CO2 values (high byte shifted left by 8, plus low byte)
    TVOC = ((data[2] << 8) + data[3]) * 0.001
    CH2O = ((data[4] << 8) + data[5]) * 0.001
    CO2 = (data[6] << 8) + data[7]

    return TVOC, CH2O, CO2

def read_uart_data(port='/dev/ttyS0', baudrate=9600, bytesize=8, parity='N', stopbits=1, timeout=1):
    try:
        ser = serial.Serial(port, baudrate=baudrate, bytesize=bytesize, parity=parity, stopbits=stopbits, timeout=timeout)

        while True:
            # read one 9-byte frame (assumes the read starts on a frame boundary)
            data = ser.read(9)
            print(data)
          
            # parse the frame
            TVOC, CH2O, CO2 = parse_uart_data(data)
          
            # print the parsed values
            if TVOC is not None:
                print("TVOC:", TVOC)
            if CH2O is not None:
                print("CH2O:", CH2O)
            if CO2 is not None:
                print("CO2:", CO2)

    except KeyboardInterrupt:
        ser.close()

if __name__ == "__main__":
    read_uart_data()

image

Smoke sensor (MQ2)

Wiring

MQ2 smoke sensor   PCF8591   Raspberry Pi
GND                GND       GND
VCC                VCC       5V
A0                 AIN0
                   SCL       SCL1
                   SDA       SDA1

After logging in to the Raspberry Pi, run sudo raspi-config, select Interfacing Options, then I2C, then Yes; that completes the setup, as shown below:

image

To check whether the sensor is connected to the Raspberry Pi, run i2cdetect -y 1 (prefix it with sudo if needed). The 48 in the figure is the PCF8591's address:

image

The code is below:

import time
import math
import smbus
class PCF8591:
    def __init__(self):
        self.CAL_PPM = 20          # PPM value in the calibration environment
        self.RL = 5                # load resistance RL
        self.bus = smbus.SMBus(1)  # open I2C bus 1
        self.address = self.get_address()

    def get_address(self):
        # scan addresses 0x48 to 0x4F and probe each one for a response
        for addr in range(0x48, 0x50):
            try:
                self.bus.read_byte(addr)
                print(addr)
                return addr
            except IOError:
                pass
        raise RuntimeError("PCF8591 module not found")

    def read(self, chn):
        if chn == 0:
            self.bus.write_byte(self.address, 0x40)
        elif chn == 1:
            self.bus.write_byte(self.address, 0x41)
        elif chn == 2:
            self.bus.write_byte(self.address, 0x42)
        elif chn == 3:
            self.bus.write_byte(self.address, 0x43)
        tmp = self.bus.read_byte(self.address)
        Vrl = 5 * tmp / 255        # 5 V reference, 8-bit ADC
        RS = (5 - Vrl) / Vrl * self.RL
        # note: R0 is recomputed from CAL_PPM on every read, so ppm always comes back as
        # CAL_PPM; normally R0 would be calibrated once in clean air and then reused
        R0 = RS / pow(self.CAL_PPM / 613.9, 1 / -2.074)
        ppm = 613.9 * pow(RS / R0, -2.074)
        return tmp,ppm

    def write(self, val):
        # scale val into the 125-255 range and drive the analog output (control byte 0x40)
        temp = val * (255 - 125) / 255 + 125
        temp = int(temp)
        self.bus.write_byte_data(self.address, 0x40, temp)

if __name__ == "__main__":
    pcf8591 = PCF8591()
    while True:
        tmp,ppm = pcf8591.read(0)
        print(tmp,ppm)
        pcf8591.write(tmp)
        time.sleep(0.5)

Result

image

Execution result

Reporting data to Alibaba Cloud

Create the product and device

Follow the official Alibaba Cloud tutorial; the free trial available to personal accounts is enough.

image

创建产品和对应设备并获取设备证书_物联网应用开发-阿里云帮助中心 (aliyun.com)

image

Define the product's thing model

Following the official Alibaba Cloud tutorial, define thing-model properties for temperature (Temperature), humidity (Humidity), TVOC, CO2, formaldehyde (Formaldehyde) and flock count (Chicken). Make sure the English names are used as the thing-model identifiers; they will be used in the code.

物联网应用开发如何为产品定义物模型_物联网应用开发-阿里云帮助中心 (aliyun.com)

Example (the identifier is the most important field):

image

Definition complete:
image

Install dependencies

The sensor data-upload feature requires a few dependencies to be installed first.

  1. Run the following commands in a terminal to install them.
sudo apt-get update
sudo apt-get install -y build-essential python-dev-is-python3 git
cd
git clone https://gitee.com/outlaw-maniac-zhang-san-1/paho-mqtt-1.6.1.git  # private repo, now closed
cd paho-mqtt-1.6.1
sudo python3 setup.py install

Walkthrough of the program (run.py)

Below is the data-upload code provided by Alibaba Cloud.

  • Imports

    #!/usr/bin/python3
    import aliLink,mqttd,rpi
    import time,json
    import Adafruit_DHT
    

  • Device triple (from the IoT console)

    ProductKey = '***'
    DeviceName = 'raspberrypi4-******'
    DeviceSecret = "assef***"
    
  • Topics (from the IoT console)

    POST = '/sys/***/raspberrypi4-***/thing/event/property/post'  # report property data to the cloud
    POST_REPLY = '/sys/***/raspberrypi4-***/thing/event/property/post_reply'
    SET = '/sys/***/raspberrypi4-***/thing/service/property/set'  # subscribe to commands from the cloud
    

  • Message callback (invoked when the cloud pushes a message)

    def on_message(client, userdata, msg):
      # print(msg.payload)
      Msg = json.loads(msg.payload)
      switch = Msg['params']['PowerLed']
      rpi.powerLed(switch)
      print(msg.payload)  # switch value
    
  • Connect callback (invoked once the connection to Alibaba Cloud is established)

    def on_connect(client, userdata, flags, rc):
      pass
    

  • Connection info

    Server, ClientId, userNmae, Password = aliLink.linkiot(DeviceName, ProductKey, DeviceSecret)
    

  • MQTT connection

    mqtt = mqttd.MQTT(Server, ClientId, userNmae, Password)
    mqtt.subscribe(SET)   # subscribe to the topic for cloud-to-device messages
    mqtt.begin(on_message, on_connect)
    

  • Collect and report data, pushing the system parameters periodically

    if __name__ == "__main__":
        # collect and report data periodically
        while True:
            time.sleep(1)
            # build a message structure that matches the cloud thing model
            updateMsn = {
                'MQ2':20,
                'HC_SR05':1,
                'buzzer':0,
                'face_recognition': 1
            }
            JsonUpdataMsn = aliLink.Alink(updateMsn)
            print(JsonUpdataMsn)
            mqtt.push(POST,JsonUpdataMsn) # periodically push the Alink-formatted data to Alibaba Cloud IoT
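
The official example above pushes placeholder fields (MQ2, HC_SR05, buzzer, face_recognition). For this project the keys must match the thing-model identifiers defined earlier (Temperature, Humidity, TVOC, CO2, Formaldehyde, Chicken). The sketch below shows the adapted loop; get_gas_readings() and count_chickens() are hypothetical helpers standing in for the UART gas-sensor code and the YOLOv5-Lite detector from the earlier sections:

if __name__ == "__main__":
    while True:
        time.sleep(10)
        # DHT11 temperature/humidity (sensor type 11 on BCM GPIO4, as in the sensor section)
        humidity, temperature = Adafruit_DHT.read_retry(11, 4)
        tvoc, ch2o, co2 = get_gas_readings()   # hypothetical UART gas-sensor helper
        chickens = count_chickens()            # hypothetical YOLOv5-Lite detector helper
        # keys must match the thing-model identifiers defined on the IoT platform
        updateMsn = {
            'Temperature': temperature,
            'Humidity': humidity,
            'TVOC': tvoc,
            'CO2': co2,
            'Formaldehyde': ch2o,
            'Chicken': chickens,
        }
        JsonUpdataMsn = aliLink.Alink(updateMsn)
        mqtt.push(POST, JsonUpdataMsn)         # push the Alink payload to Alibaba Cloud IoT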
    

Run the program

Run the following command in a terminal.

python3 run.py

The output looks like this:

image

  • Check the reported data on the IoT platform.

    Go to the device details page and click Thing Model Data > Status to see the newly reported temperature and humidity data.

    The result after connecting looks like this:

    image

Data backup

The log function

import os
from datetime import datetime

def log_data(data=None, error=None):
    """
    Append data (or an error) to a daily log file.
    """
    # make sure the log directory exists
    log_dir = 'logs'
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)

    # build the log file path (one file per day)
    log_file = os.path.join(log_dir, f"{datetime.now().strftime('%Y-%m-%d')}.log")

    # open the log file and append a timestamped entry
    with open(log_file, 'a') as f:
        timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        if error:
            log_entry = f"{timestamp}: {error}\n"
        else:
            log_entry = f"{timestamp}: {data}\n"
        f.write(log_entry)
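
A typical call site, assuming the upload loop from run.py above (just a sketch):

try:
    mqtt.push(POST, JsonUpdataMsn)
    log_data(data=JsonUpdataMsn)     # keep a local copy of every reported payload
except Exception as e:
    log_data(error=str(e))           # record failures as well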

Result

image

Cloud platform and integrated-code workflow (customized for the client; paid content)

On the pre-configured system, the username is zs and the password is zs.

It connects to WiFi by default: SSID G, password goodlife.

After booting, first follow the Alibaba Cloud section above and fill in the basic information.

image

Then run in the terminal:

cd /home/zs/Desktop/智能鸡舍/flask
python3 app.py

Open a browser and go to http://<Raspberry Pi IP>:5000 to see:

image

image


  1. Docker: adding your user to the docker group

    Install Docker

    Run directly in bash:

    # Add Docker's official GPG key:
    sudo apt-get update
    sudo apt-get install ca-certificates curl gnupg
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    
    # Add the repository to Apt sources:
    echo \
      "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    
    sudo docker run hello-world
    

    If that completes without errors, try the following command:

    docker ps -a
    

    If it does not complain that the docker command is missing, the installation succeeded.

    Configure the user group

    # create the docker group (it usually already exists, in which case this is unnecessary)
    sudo groupadd docker
    # add the current login user to the docker group
    sudo gpasswd -a $USER docker
    # refresh group membership
    newgrp docker
    # test that the docker command now works without sudo
    docker version
    
