Fabric Introduction and Quick Installation

System Environment

1 Ubuntu

  • Version: 18.04

2 Go environment

  • Version: 1.14.2

3 Docker

Install Docker CE

  • Remove old Docker versions
sudo apt-get remove docker docker-engine docker.io
  • Install packages that allow apt to use repositories over HTTPS
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
  • Add the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
  • Add the Docker repository
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
  • Update the package index and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce

Add the current user to the Docker group

  • Create the docker group
sudo groupadd docker
  • Add the current user to the docker group
sudo usermod -aG docker $USER
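
The group change usually only takes effect after logging out and back in (or starting a new login shell). A minimal check, assuming Docker was installed as above:

newgrp docker
docker run hello-world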

Switch Docker to a domestic registry mirror

  • Edit daemon.json; create the file if it does not exist
sudo vim /etc/docker/daemon.json
  • Add the following content to the file
{
"registry-mirrors":["https://obou6wyb.mirror.aliyuncs.com","https://registry.docker-cn.com","http://hub-mirror.c.163.com"]
}
  • Finally, reload and restart the service
sudo systemctl daemon-reload
sudo systemctl restart docker
  • Check the Docker version to confirm the installation succeeded
docker version
  • Run docker info; if the output contains the following, the mirror configuration succeeded
...
 Registry Mirrors:
  https://obou6wyb.mirror.aliyuncs.com/
  https://registry.docker-cn.com/
  http://hub-mirror.c.163.com/
 Live Restore Enabled: false
 ...

Install Docker Compose

  • Method 1 (download the binary; the second URL is a domestic mirror of the first):
sudo curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

sudo curl -L https://get.daocloud.io/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose

After downloading with Method 1, make the binary executable:

sudo chmod +x /usr/local/bin/docker-compose
  • Method 2:
sudo apt-get install python-pip
sudo pip install docker-compose
  • After installation, verify that it succeeded
docker-compose version

Pull the Fabric images

docker pull hyperledger/fabric-peer:1.4.6
docker pull hyperledger/fabric-tools:1.4.6
docker pull hyperledger/fabric-orderer:1.4.6
docker pull hyperledger/fabric-javaenv:1.4.6
docker pull hyperledger/fabric-ca:1.4.6
docker pull hyperledger/fabric-ccenv:1.4.6
docker pull hyperledger/fabric-zookeeper:0.4.18
docker pull hyperledger/fabric-kafka:0.4.18
docker pull hyperledger/fabric-couchdb:0.4.18
docker pull hyperledger/fabric-baseimage:0.4.18
docker pull hyperledger/fabric-baseos:0.4.18
docker pull hyperledger/fabric-membersrvc:latest 

docker pull hyperledger/fabric-peer:1.4.6 && docker pull hyperledger/fabric-tools:1.4.6 && docker pull hyperledger/fabric-orderer:1.4.6 && docker pull hyperledger/fabric-javaenv:1.4.6 && docker pull hyperledger/fabric-ca:1.4.6 && docker pull hyperledger/fabric-ccenv:1.4.6 && docker pull hyperledger/fabric-zookeeper:0.4.18 && docker pull hyperledger/fabric-kafka:0.4.18 && docker pull hyperledger/fabric-couchdb:0.4.18 && docker pull hyperledger/fabric-baseimage:0.4.18 && docker pull hyperledger/fabric-baseos:0.4.18 && docker pull hyperledger/fabric-membersrvc:latest
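
If you prefer a loop over the long chained command, a small sketch that pulls the same images and tags listed above:

for img in fabric-peer:1.4.6 fabric-tools:1.4.6 fabric-orderer:1.4.6 fabric-javaenv:1.4.6 \
           fabric-ca:1.4.6 fabric-ccenv:1.4.6 fabric-zookeeper:0.4.18 fabric-kafka:0.4.18 \
           fabric-couchdb:0.4.18 fabric-baseimage:0.4.18 fabric-baseos:0.4.18 fabric-membersrvc:latest; do
    docker pull "hyperledger/$img"    # pull each image in turn
done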

  • After the images are pulled, list the local Docker images
docker images

(Optional) To simplify the Docker Compose configuration, you can retag every image as latest using commands of the following form:

docker tag IMAGE_ID REPOSITORY:TAG

For example:

docker tag 68914607b3a5 docker.io/hyperledger/fabric-tools:latest
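
To retag all of the 1.4.6-tagged Fabric images in one pass, a short loop such as the following can be used (a sketch; it retags by image name rather than by image ID and assumes the images pulled above):

for name in peer tools orderer javaenv ca ccenv; do
    docker tag "hyperledger/fabric-$name:1.4.6" "hyperledger/fabric-$name:latest"
done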

After all the tags have been changed, list the images again

docker images

Part 1: Building and Installing Fabric

1.1 Go environment variable configuration

vim /etc/profile
export GOROOT=/opt/go           # Go installation directory
export GOPATH=/opt/gocode       # Go workspace (project code) directory
export PATH=$GOROOT/bin:$PATH   # add the Go toolchain to PATH
export GOBIN=$GOPATH/bin        # where `go install` puts built executables
source /etc/profile
tar -C /opt -xzvf go1.14.2.linux-amd64.tar.gz    # unpacks to /opt/go, matching GOROOT
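
Verify that the Go toolchain is on the PATH and reports the expected version:

go version    # should print: go version go1.14.2 linux/amd64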

1.2 Create the directory and download the source code

  • Create the hyperledger directory
mkdir -p $GOPATH/src/github.com/hyperledger
cd $GOPATH/src/github.com/hyperledger
  • Download the Fabric source code
git clone https://github.com/hyperledger/fabric

Cloning from the Gitee mirror is faster in mainland China:

git clone https://gitee.com/mirrors/hyperledger-fabric.git fabric

1.3 Install the related dependencies

go get github.com/golang/protobuf/protoc-gen-go
mkdir -p $GOPATH/src/github.com/hyperledger/fabric/build/docker/gotools/bin
cp  $GOPATH/bin/protoc-gen-go $GOPATH/src/github.com/hyperledger/fabric/build/docker/gotools/bin

Note: after go get, the compiled binary is placed in the directory specified by the $GOBIN environment variable; if $GOBIN is not set, the generated file goes into $GOPATH/bin by default.

1.4 Building the Fabric modules

  • ==Change into the Fabric source directory==
cd $GOPATH/src/github.com/hyperledger/fabric
  • ==Check out release v1.4.6==
git checkout -b v1.4.6 v1.4.6

The downloaded source cannot be used directly; it must be compiled to produce the various node binaries and tools. We build with the Makefile provided in the source tree. First edit the Makefile and confirm the version is set as follows:

BASE_VERSION = 1.4.6

The following command builds the five major Fabric modules in a single pass:

make release

After the command completes, the compiled binaries are placed in the following path:

$GOPATH/src/github.com/hyperledger/fabric/release/linux-amd64/bin
  • Check that the version matches
cd $GOPATH/src/github.com/hyperledger/fabric/release/linux-amd64/bin
./peer version

Alternatively, build the modules individually:

make orderer
make peer
make configtxlator
make cryptogen
make configtxgen

Check the .build/bin directory

ll .build/bin/

  • Fabric modules
Module          Function
peer            main node module; stores blockchain data and runs and maintains chaincode
orderer         transaction packaging and ordering module
cryptogen       organization and certificate generation module
configtxgen     block and transaction generation module
configtxlator   block and transaction parsing module

  • On macOS, make the following change before building:
  • Open the file $GOPATH/src/github.com/hyperledger/fabric/Makefile
  • Find the first occurrence of the GO_LDFLAGS string and append -s at the end of that line
  • Save the Makefile

1.5 Installing the Fabric modules

This simply means putting the binaries built above onto the system PATH.

  • ==Step 1: On Ubuntu and CentOS, copy the compiled Fabric binaries into a system directory==
cp $GOPATH/src/github.com/hyperledger/fabric/release/linux-amd64/bin/* /usr/local/bin

On macOS, copy the compiled Fabric binaries into the system directory as follows:

cp $GOPATH/src/github.com/hyperledger/fabric/release/darwin-amd64/bin/* /usr/local/bin

After copying, adjust the execute permissions with the following commands, otherwise the binaries cannot be run.

sudo chmod -R 775  /usr/local/bin/configtxgen
sudo chmod -R 775  /usr/local/bin/configtxlator
sudo chmod -R 775  /usr/local/bin/cryptogen
sudo chmod -R 775  /usr/local/bin/peer
sudo chmod -R 775  /usr/local/bin/orderer

After running these commands, the modules can be executed from any path on the system. The following commands check whether the installation succeeded.

  • ==Step 2: Verify the installation==

Use the version subcommand:

peer version
peer:
 Version: 1.4.6
 Commit SHA: 635fa7bc8
 Go version: go1.14.2
 OS/Arch: linux/amd64
 Chaincode:
  Base Image Version: 0.4.18
  Base Docker Namespace: hyperledger
  Base Docker Label: org.hyperledger.fabric
  Docker Namespace: hyperledger
orderer version
orderer:
 Version: 1.4.6
 Commit SHA: 635fa7bc8
 Go version: go1.14.2
 OS/Arch: linux/amd64

Part 2: Quickly Starting a Fabric Application

2.1 Step 1: Generate the organization structure and crypto material

The crypto configuration file is crypto-config.yaml.

2.1.1 Create the folder for the certificates

  • The generated certificate files are stored in the certificate folder
$GOPATH/src/github.com/hyperledger/certificate
cd $GOPATH/src/github.com/hyperledger/certificate
  • ==The command to create the folder is:==
mkdir -p $GOPATH/src/github.com/hyperledger/certificate

2.1.2 The certificate generation command

  • cryptogen provides a command that prints a template of the configuration file the cryptogen module expects:
cryptogen showtemplate
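
The template can also be redirected into a file and edited from there, which is a convenient starting point:

cryptogen showtemplate > crypto-config.yaml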

2.1.3 Write the configuration file

  • File name: crypto-config.yaml
OrdererOrgs:
  - Name: Orderer    # name of the Orderer
    Domain: binny.com    # domain name
    Specs:
      - Hostname: orderer    # Hostname + Domain form the Orderer node's full domain name

PeerOrgs:
  - Name: Org1
    Domain: org1.binny.com
  
    Template:
      Count: 1
    Users:
      Count: 1

  - Name: Org2
    Domain: org2.binny.com
  
    Template:
      Count: 1
    Users:
      Count: 1

2.1.4 Generate the certificate files

  • ==Run the following command:==
cryptogen generate --config=crypto-config.yaml --output ./crypto-config

After the command completes, a new folder named crypto-config appears under $GOPATH/src/github.com/hyperledger/certificate, containing the material generated for this example. You can inspect the generated certificates with the tree command.

org1.binny.com
org2.binny.com

The most important parts of the generated directory tree are the msp directories under each entity, which hold the certificate files representing the MSP identity. They generally include:

  • admincerts: identity certificates of the administrators
  • cacerts: trusted root CA certificates

  • tlscacerts: certificates used for TLS connections

  • config.yaml (optional): records the OrganizationalUnitIdentifiers information, including root certificate locations and IDs

These identity files can then be distributed to the corresponding Orderer and Peer nodes and placed under the matching MSP paths, where they are used for signing and signature verification.
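
As a quick sanity check, you can list one peer's local MSP directory; with cryptogen-generated material it typically contains the subfolders mentioned above (admincerts, cacerts, signcerts, keystore, tlscacerts). The path below follows this example's layout:

ls crypto-config/peerOrganizations/org1.binny.com/peers/peer0.org1.binny.com/msp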

2.1.5 FAQ

  • Can new organizations be added to the organization structure?

Before generating the structure, the organization details are specified in the crypto-config.yaml configuration file; for multiple organizations, simply append new organization entries at the end of the PeerOrgs section.

  • [x] Can an Org contain multiple Peer nodes?

Yes, multiple nodes can be specified; just change the Count value under Template (it defines how many peer nodes the organization has). See the snippet after this list.

  • [x] Can the organization structure be extended or modified after it has been generated?

Currently, Hyperledger Fabric cannot modify an organization structure that has already been generated, so plan it in advance. Dynamic modification of organization nodes is expected to be supported in the future.

  • [ ] Difference between the orderer organization and the peer organizations
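
For example, to give an organization two peer nodes (peer0 and peer1), the Template section of crypto-config.yaml would look like this (a sketch based on the Org1 entry above):

PeerOrgs:
  - Name: Org1
    Domain: org1.binny.com
    Template:
      Count: 2    # generates peer0.org1.binny.com and peer1.org1.binny.com
    Users:
      Count: 1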

2.1.6 Node domain-name mapping

All of the certificate files have now been generated. Next, the test domain names must be mapped to the local machine's IP address, otherwise later steps may fail.

Run the following commands to find the relevant domain names:

cd $GOPATH/src/github.com/hyperledger/certificate
tree -L 5

From the output you can see one orderer domain and two peer domains:


orderer.binny.com
peer0.org1.binny.com
peer0.org2.binny.com

Open the hosts file and map the domain names to the local IP (10.211.55.20 in this example):

vi /etc/hosts
10.211.55.20 orderer.binny.com
10.211.55.20 peer0.org1.binny.com
10.211.55.20 peer1.org1.binny.com
10.211.55.20 peer0.org2.binny.com
10.211.55.20 peer1.org2.binny.com

After adding these lines, save /etc/hosts and use ping to verify that the mapping works.

ping peer0.org1.binny.com

2.2 Step 2: Create the orderer service and generate the genesis block

Fabric is a blockchain-based distributed ledger. Every ledger has its own blockchain, and the ledger's transactions are stored in that chain. The first block of a ledger's chain is an exception: it contains no transaction data but configuration information, and is usually called the genesis block. Consequently, the first block of a Fabric ledger has to be generated manually.

The configtxgen module is responsible for generating the system genesis block and the channel genesis (configuration) transactions.

configtxgen also needs a configuration file, configtx.yaml, to define the relevant properties.

The Fabric source provides an example of the configuration file that configtxgen expects. Its path is:

$GOPATH/src/github.com/hyperledger/fabric/sampleconfig

In that directory there is a file named configtx.yaml; modify it and it is ready to use.

2.2.1 Write the genesis block configuration file


Creating the orderer bootstrap block and the application channel transaction requires the Orderer service settings and the current consortium information; these are defined in a file named configtx.yaml.

  • The configtx.yaml configuration file is as follows:
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/ordererOrganizations/binny.com/msp 
        
    - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/msp 
        AnchorPeers:
            - Host: peer0.org1.binny.com
              Port: 7051

    - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org2.binny.com/msp 
        AnchorPeers:
            - Host: peer0.org2.binny.com
              Port: 7051

Orderer: &OrdererDefaults
    OrdererType: solo
    Addresses:
        - orderer.binny.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 98 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - 192.168.23.149:9092
    Organizations:
    
Application: &ApplicationDefaults
    Organizations:  
      
Profiles:

    TestTwoOrgsOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2                  
    
    TestTwosOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2

2.2.2 Create the Orderer bootstrap block

==Create the Orderer service bootstrap block==

cd $GOPATH/src/github.com/hyperledger/certificate
export FABRIC_CFG_PATH=$GOPATH/src/github.com/hyperledger/certificate

==Generate the genesis block file==

configtxgen -profile TestTwoOrgsOrdererGenesis -outputBlock ./orderer.genesis.block
  • Notes
  • TestTwoOrgsOrdererGenesis: the profile name defined in the configuration file
  • orderer.genesis.block: the file name is arbitrary, but it is referenced in later configuration files

After the command completes, the file orderer.genesis.block is generated in $GOPATH/src/github.com/hyperledger/certificate. This is the genesis block file of the Fabric system channel.
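
You can sanity-check the generated block with configtxgen's inspect option (the same option is used again near the end of this guide):

configtxgen -inspectBlock ./orderer.genesis.block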

2.2.3 Create the required application channel transaction file

  • Set an environment variable for the channel name (the name is arbitrary)

==Set a temporary variable holding the channel name==

export CHANNEL_NAME=binnychannel

Because the same channel name is needed by several later commands, we store it in an environment variable once and simply reference the variable afterwards.


  • Generate the application channel transaction file

Use the TestTwosOrgsChannel profile from configtx.yaml to generate the channel creation transaction file; this profile specifies that both Org1 and Org2 belong to the application channel created later.

configtxgen -profile TestTwosOrgsChannel -outputCreateChannelTx ./binnychannel.tx -channelID $CHANNEL_NAME

2.2.4 Generate the anchor peer update transactions

Again based on the TestTwosOrgsChannel profile in configtx.yaml, generate an anchor peer update transaction for each organization, taking care to pass the corresponding organization name.

  • The channelID must be the same as above
configtxgen -profile TestTwosOrgsChannel -outputAnchorPeersUpdate ./Org1MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org1MSP

configtxgen -profile TestTwosOrgsChannel -outputAnchorPeersUpdate ./Org2MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org2MSP

After these commands complete, the files Org1MSPanchors.tx and Org2MSPanchors.tx are generated in the corresponding folder; they will be used later.

2.2.5 ==Starting the Orderer node==

orderer.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

---
################################################################################
#
#   Orderer Configuration
#
#   - This controls the type and configuration of the orderer.
#
################################################################################
General:

    # Ledger Type: The ledger type to provide to the orderer.
    # Two non-production ledger types are provided for test purposes only:
    #  - ram: An in-memory ledger whose contents are lost on restart.
    #  - json: A simple file ledger that writes blocks to disk in JSON format.
    # Only one production ledger type is provided:
    #  - file: A production file-based ledger.
    LedgerType: file

    # Listen address: The IP on which to bind to listen.
    ListenAddress: 127.0.0.1

    # Listen port: The port on which to bind to listen.
    ListenPort: 7050

    # TLS: TLS settings for the GRPC server.
    TLS:
        Enabled: false
        # PrivateKey governs the file location of the private key of the TLS certificate.
        PrivateKey: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/ordererOrganizations/binny.com/orderers/orderer.binny.com/tls/server.key
        # Certificate governs the file location of the server TLS certificate.
        Certificate: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/ordererOrganizations/binny.com/orderers/orderer.binny.com/tls/server.crt
        RootCAs:
          - /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/ordererOrganizations/binny.com/orderers/orderer.binny.com/tls/ca.crt
        ClientAuthRequired: false
        ClientRootCAs:
    # Keepalive settings for the GRPC server.
    Keepalive:
        # ServerMinInterval is the minimum permitted time between client pings.
        # If clients send pings more frequently, the server will
        # disconnect them.
        ServerMinInterval: 60s
        # ServerInterval is the time between pings to clients.
        ServerInterval: 7200s
        # ServerTimeout is the duration the server waits for a response from
        # a client before closing the connection.
        ServerTimeout: 20s
    # Cluster settings for ordering service nodes that communicate with other ordering service nodes
    # such as Raft based ordering service.
    Cluster:
        # SendBufferSize is the maximum number of messages in the egress buffer.
        # Consensus messages are dropped if the buffer is full, and transaction
        # messages are waiting for space to be freed.
        SendBufferSize: 10
        # ClientCertificate governs the file location of the client TLS certificate
        # used to establish mutual TLS connections with other ordering service nodes.
        ClientCertificate:
        # ClientPrivateKey governs the file location of the private key of the client TLS certificate.
        ClientPrivateKey:
        # The below 4 properties should be either set together, or be unset together.
        # If they are set, then the orderer node uses a separate listener for intra-cluster
        # communication. If they are unset, then the general orderer listener is used.
        # This is useful if you want to use a different TLS server certificates on the
        # client-facing and the intra-cluster listeners.

        # ListenPort defines the port on which the cluster listens to connections.
        ListenPort:
        # ListenAddress defines the IP on which to listen to intra-cluster communication.
        ListenAddress:
        # ServerCertificate defines the file location of the server TLS certificate used for intra-cluster
        # communication.
        ServerCertificate:
        # ServerPrivateKey defines the file location of the private key of the TLS certificate.
        ServerPrivateKey:
    # Genesis method: The method by which the genesis block for the orderer
    # system channel is specified. Available options are "provisional", "file":
    #  - provisional: Utilizes a genesis profile, specified by GenesisProfile,
    #                 to dynamically generate a new genesis block.
    #  - file: Uses the file provided by GenesisFile as the genesis block.
    GenesisMethod: file

    # Genesis profile: The profile to use to dynamically generate the genesis
    # block to use when initializing the orderer system channel and
    # GenesisMethod is set to "provisional". See the configtx.yaml file for the
    # descriptions of the available profiles. Ignored if GenesisMethod is set to
    # "file".
    GenesisProfile: TestTwoOrgsOrdererGenesis

    # Genesis file: The file containing the genesis block to use when
    # initializing the orderer system channel and GenesisMethod is set to
    # "file". Ignored if GenesisMethod is set to "provisional".
    # Change this to the path of your own genesis block file
    GenesisFile: /opt/gocode/src/github.com/hyperledger/certificate/orderer.genesis.block

    # LocalMSPDir is where to find the private certificate material needed by the
    # orderer. It is set relative here as a default for dev environments but
    # should be changed to the real location in production.
    LocalMSPDir: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/ordererOrganizations/binny.com/orderers/orderer.binny.com/msp

    # LocalMSPID is the identity to register the local MSP material with the MSP
    # manager. IMPORTANT: The local MSP ID of an orderer needs to match the MSP
    # ID of one of the organizations defined in the orderer system channel's
    # /Channel/Orderer configuration. The sample organization defined in the
    # sample configuration provided has an MSP ID of "SampleOrg".
    LocalMSPID: OrdererMSP

    # Enable an HTTP service for Go "pprof" profiling as documented at:
    # https://golang.org/pkg/net/http/pprof
    Profile:
        Enabled: false
        Address: 0.0.0.0:6060

    # BCCSP configures the blockchain certificate service providers.
    BCCSP:
        # Default specifies the preferred blockchain certificate service provider
        # to use. If the preferred provider is not available, the software
        # based provider ("SW") will be used.
        # Valid providers are:
        #  - SW: a software based certificate provider
        #  - PKCS11: a CA hardware security module certificate provider.
        Default: SW

        # SW configures the software based blockchain certificate provider.
        SW:
            # TODO: The default Hash and Security level needs refactoring to be
            # fully configurable. Changing these defaults requires coordination
            # SHA2 is hardcoded in several places, not only BCCSP
            Hash: SHA2
            Security: 256
            # Location of key store. If this is unset, a location will be
            # chosen using: 'LocalMSPDir'/keystore
            FileKeyStore:
                KeyStore:

        # Settings for the PKCS#11 crypto provider (i.e. when DEFAULT: PKCS11)
        PKCS11:
            # Location of the PKCS11 module library
            Library:
            # Token Label
            Label:
            # User PIN
            Pin:
            Hash:
            Security:
            FileKeyStore:
                KeyStore:

    # Authentication contains configuration parameters related to authenticating
    # client messages
    Authentication:
        # the acceptable difference between the current server time and the
        # client's time as specified in a client request message
        TimeWindow: 15m

################################################################################
#
#   SECTION: File Ledger
#
#   - This section applies to the configuration of the file or json ledgers.
#
################################################################################
FileLedger:

    # Location: The directory to store the blocks in.
    # NOTE: If this is unset, a new temporary location will be chosen every time
    # the orderer is restarted, using the prefix specified by Prefix.
    Location: /opt/gocode/src/github.com/hyperledger/certificate/orderdata

    # The prefix to use when generating a ledger directory in temporary space.
    # Otherwise, this value is ignored.
    Prefix: hyperledger-fabric-ordererledger

################################################################################
#
#   SECTION: RAM Ledger
#
#   - This section applies to the configuration of the RAM ledger.
#
################################################################################
RAMLedger:

    # History Size: The number of blocks that the RAM ledger is set to retain.
    # WARNING: Appending a block to the ledger might cause the oldest block in
    # the ledger to be dropped in order to limit the number total number blocks
    # to HistorySize. For example, if history size is 10, when appending block
    # 10, block 0 (the genesis block!) will be dropped to make room for block 10.
    HistorySize: 1000

################################################################################
#
#   SECTION: Kafka
#
#   - This section applies to the configuration of the Kafka-based orderer, and
#     its interaction with the Kafka cluster.
#
################################################################################
Kafka:

    # Retry: What do if a connection to the Kafka cluster cannot be established,
    # or if a metadata request to the Kafka cluster needs to be repeated.
    Retry:
        # When a new channel is created, or when an existing channel is reloaded
        # (in case of a just-restarted orderer), the orderer interacts with the
        # Kafka cluster in the following ways:
        # 1. It creates a Kafka producer (writer) for the Kafka partition that
        # corresponds to the channel.
        # 2. It uses that producer to post a no-op CONNECT message to that
        # partition
        # 3. It creates a Kafka consumer (reader) for that partition.
        # If any of these steps fail, they will be re-attempted every
        #  for a total of , and then every
        #  for a total of  until they succeed.
        # Note that the orderer will be unable to write to or read from a
        # channel until all of the steps above have been completed successfully.
        ShortInterval: 5s
        ShortTotal: 10m
        LongInterval: 5m
        LongTotal: 12h
        # Affects the socket timeouts when waiting for an initial connection, a
        # response, or a transmission. See Config.Net for more info:
        # https://godoc.org/github.com/Shopify/sarama#Config
        NetworkTimeouts:
            DialTimeout: 10s
            ReadTimeout: 10s
            WriteTimeout: 10s
        # Affects the metadata requests when the Kafka cluster is in the middle
        # of a leader election.See Config.Metadata for more info:
        # https://godoc.org/github.com/Shopify/sarama#Config
        Metadata:
            RetryBackoff: 250ms
            RetryMax: 3
        # What to do if posting a message to the Kafka cluster fails. See
        # Config.Producer for more info:
        # https://godoc.org/github.com/Shopify/sarama#Config
        Producer:
            RetryBackoff: 100ms
            RetryMax: 3
        # What to do if reading from the Kafka cluster fails. See
        # Config.Consumer for more info:
        # https://godoc.org/github.com/Shopify/sarama#Config
        Consumer:
            RetryBackoff: 2s
    # Settings to use when creating Kafka topics.  Only applies when
    # Kafka.Version is v0.10.1.0 or higher
    Topic:
        # The number of Kafka brokers across which to replicate the topic
        ReplicationFactor: 3
    # Verbose: Enable logging for interactions with the Kafka cluster.
    Verbose: false

    # TLS: TLS settings for the orderer's connection to the Kafka cluster.
    TLS:

      # Enabled: Use TLS when connecting to the Kafka cluster.
      Enabled: false

      # PrivateKey: PEM-encoded private key the orderer will use for
      # authentication.
      PrivateKey:
        # As an alternative to specifying the PrivateKey here, uncomment the
        # following "File" key and specify the file name from which to load the
        # value of PrivateKey.
        #File: path/to/PrivateKey

      # Certificate: PEM-encoded signed public key certificate the orderer will
      # use for authentication.
      Certificate:
        # As an alternative to specifying the Certificate here, uncomment the
        # following "File" key and specify the file name from which to load the
        # value of Certificate.
        #File: path/to/Certificate

      # RootCAs: PEM-encoded trusted root certificates used to validate
      # certificates from the Kafka cluster.
      RootCAs:
        # As an alternative to specifying the RootCAs here, uncomment the
        # following "File" key and specify the file name from which to load the
        # value of RootCAs.
        #File: path/to/RootCAs

    # SASLPlain: Settings for using SASL/PLAIN authentication with Kafka brokers
    SASLPlain:
      # Enabled: Use SASL/PLAIN to authenticate with Kafka brokers
      Enabled: false
      # User: Required when Enabled is set to true
      User:
      # Password: Required when Enabled is set to true
      Password:

    # Kafka protocol version used to communicate with the Kafka cluster brokers
    # (defaults to 0.10.2.0 if not specified)
    Version:

################################################################################
#
#   Debug Configuration
#
#   - This controls the debugging options for the orderer
#
################################################################################
Debug:

    # BroadcastTraceDir when set will cause each request to the Broadcast service
    # for this orderer to be written to a file in this directory
    BroadcastTraceDir:

    # DeliverTraceDir when set will cause each request to the Deliver service
    # for this orderer to be written to a file in this directory
    DeliverTraceDir:

################################################################################
#
#   Operations Configuration
#
#   - This configures the operations server endpoint for the orderer
#
################################################################################
Operations:
    # host and port for the operations server
    ListenAddress: 127.0.0.1:8443

    # TLS configuration for the operations endpoint
    TLS:
        # TLS enabled
        Enabled: false

        # Certificate is the location of the PEM encoded TLS certificate
        Certificate:

        # PrivateKey points to the location of the PEM-encoded key
        PrivateKey:

        # Require client certificate authentication to access all resources
        ClientAuthRequired: false

        # Paths to PEM encoded ca certificates to trust for client authentication
        ClientRootCAs: []

################################################################################
#
#   Metrics  Configuration
#
#   - This configures metrics collection for the orderer
#
################################################################################
Metrics:
    # The metrics provider is one of statsd, prometheus, or disabled
    Provider: disabled

    # The statsd configuration
    Statsd:
      # network type: tcp or udp
      Network: udp

      # the statsd server address
      Address: 127.0.0.1:8125

      # The interval at which locally cached counters and gauges are pushed
      # to statsd; timings are pushed immediately
      WriteInterval: 30s

      # The prefix is prepended to all emitted statsd metrics
      Prefix:

################################################################################
#
#   Consensus Configuration
#
#   - This section contains config options for a consensus plugin. It is opaque
#     to orderer, and completely up to consensus implementation to make use of.
#
################################################################################
Consensus:
    # The allowed key-value pairs here depend on consensus plugin. For etcd/raft,
    # we use following options:

    # WALDir specifies the location at which Write Ahead Logs for etcd/raft are
    # stored. Each channel will have its own subdir named after channel ID.
    WALDir: /opt/gocode/src/github.com/hyperledger/certificate/etcdraft/wal

    # SnapDir specifies the location at which snapshots for etcd/raft are
    # stored. Each channel will have its own subdir named after channel ID.
    SnapDir: /opt/gocode/src/github.com/hyperledger/certificate/etcdraft/snapshot

The Orderer node is responsible for packaging transactions and producing blocks. Its configuration is normally supplied via environment variables or a configuration file; in this example all configuration is kept in the file. The Fabric source provides a sample of the configuration file the orderer needs on startup; copy the sample into the Orderer folder and modify it.

Copy the configuration file into the Orderer folder:

mkdir -p $GOPATH/src/github.com/hyperledger/order
cp $GOPATH/src/github.com/hyperledger/fabric/sampleconfig/orderer.yaml $GOPATH/src/github.com/hyperledger/order

A few modifications to the template configuration file are enough for this example; for brevity, only the parts that need changing are called out here.

Pay attention to the following paths in the configuration file:

  • the paths of the TLS certificates
  • the path of the genesis block file (GenesisFile)

In the directory containing orderer.yaml, run the following command to start the orderer:

orderer start
  • Problem: Not bootstrapping because of 2 existing channels

Delete the orderer's leftover ledger files from the directory.
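
Based on the FileLedger Location configured in orderer.yaml above, clearing the leftover ledger data looks like this (a sketch; adjust the path if you changed Location):

rm -rf /opt/gocode/src/github.com/hyperledger/certificate/orderdata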

==2.2.6 Starting the Peer node==

The Peer module is the core node of Fabric: all transaction data, after being ordered and packaged by the Orderer, is written to the blockchain by the Peer, and all chaincode is packaged and activated by the Peer as well. Peer configuration likewise comes from environment variables plus a configuration file; in this example we configure the peer through the configuration file. Before writing it, create a folder to hold the Peer configuration and block data. The Fabric source also provides a sample Peer configuration file; copy it into the Peer folder and modify it.

A few modifications to the template core.yaml are enough for this example:
core.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

###############################################################################
#
#    Peer section
#
###############################################################################
peer:

    # The Peer id is used for identifying this Peer instance.
    id: peer0.org1.binny.com

    # The networkId allows for logical seperation of networks
    networkId: dev

    # The Address at local network interface this Peer will listen on.
    # By default, it will listen on all network interfaces
    listenAddress: 0.0.0.0:7051

    # The endpoint this peer uses to listen for inbound chaincode connections.
    # If this is commented-out, the listen address is selected to be
    # the peer's address (see below) with port 7052
    chaincodeListenAddress: 0.0.0.0:7052

    # The endpoint the chaincode for this peer uses to connect to the peer.
    # If this is not specified, the chaincodeListenAddress address is selected.
    # And if chaincodeListenAddress is not specified, address is selected from
    # peer listenAddress.
    # chaincodeAddress: 0.0.0.0:7052

    # When used as peer config, this represents the endpoint to other peers
    # in the same organization. For peers in other organization, see
    # gossip.externalEndpoint for more info.
    # When used as CLI config, this means the peer's endpoint to interact with
    address: peer0.org1.binny.com:7051

    # Whether the Peer should programmatically determine its address
    # This case is useful for docker containers.
    addressAutoDetect: false
    gomaxprocs: -1

    # Keepalive settings for peer server and clients
    keepalive:
        # MinInterval is the minimum permitted time between client pings.
        # If clients send pings more frequently, the peer server will
        # disconnect them
        minInterval: 60s
        # Client keepalive settings for communicating with other peer nodes
        client:
            # Interval is the time between pings to peer nodes.  This must
            # greater than or equal to the minInterval specified by peer
            # nodes
            interval: 60s
            # Timeout is the duration the client waits for a response from
            # peer nodes before closing the connection
            timeout: 20s
        # DeliveryClient keepalive settings for communication with ordering
        # nodes.
        deliveryClient:
            # Interval is the time between pings to ordering nodes.  This must
            # greater than or equal to the minInterval specified by ordering
            # nodes.
            interval: 60s
            # Timeout is the duration the client waits for a response from
            # ordering nodes before closing the connection
            timeout: 20s


    # Gossip related configuration
    gossip:
        # Bootstrap set to initialize gossip with.
        # This is a list of other peers that this peer reaches out to at startup.
        # Important: The endpoints here have to be endpoints of peers in the same
        # organization, because the peer would refuse connecting to these endpoints
        # unless they are in the same organization as the peer.
        bootstrap: peer0.org1.binny.com:7051

        # NOTE: orgLeader and useLeaderElection parameters are mutual exclusive.
        # Setting both to true would result in the termination of the peer
        # since this is undefined state. If the peers are configured with
        # useLeaderElection=false, make sure there is at least 1 peer in the
        # organization that its orgLeader is set to true.

        # Defines whenever peer will initialize dynamic algorithm for
        # "leader" selection, where leader is the peer to establish
        # connection with ordering service and use delivery protocol
        # to pull ledger blocks from ordering service. It is recommended to
        # use leader election for large networks of peers.
        useLeaderElection: true
        # Statically defines peer to be an organization "leader",
        # where this means that current peer will maintain connection
        # with ordering service and disseminate block across peers in
        # its own organization
        orgLeader: false

        # Overrides the endpoint that the peer publishes to peers
        # in its organization. For peers in foreign organizations
        # see 'externalEndpoint'
        endpoint:
        # Maximum count of blocks stored in memory
        maxBlockCountToStore: 100
        # Max time between consecutive message pushes(unit: millisecond)
        maxPropagationBurstLatency: 10ms
        # Max number of messages stored until a push is triggered to remote peers
        maxPropagationBurstSize: 10
        # Number of times a message is pushed to remote peers
        propagateIterations: 1
        # Number of peers selected to push messages to
        propagatePeerNum: 3
        # Determines frequency of pull phases(unit: second)
        # Must be greater than digestWaitTime + responseWaitTime
        pullInterval: 4s
        # Number of peers to pull from
        pullPeerNum: 3
        # Determines frequency of pulling state info messages from peers(unit: second)
        requestStateInfoInterval: 4s
        # Determines frequency of pushing state info messages to peers(unit: second)
        publishStateInfoInterval: 4s
        # Maximum time a stateInfo message is kept until expired
        stateInfoRetentionInterval:
        # Time from startup certificates are included in Alive messages(unit: second)
        publishCertPeriod: 10s
        # Should we skip verifying block messages or not (currently not in use)
        skipBlockVerification: false
        # Dial timeout(unit: second)
        dialTimeout: 3s
        # Connection timeout(unit: second)
        connTimeout: 2s
        # Buffer size of received messages
        recvBuffSize: 20
        # Buffer size of sending messages
        sendBuffSize: 200
        # Time to wait before pull engine processes incoming digests (unit: second)
        # Should be slightly smaller than requestWaitTime
        digestWaitTime: 1s
        # Time to wait before pull engine removes incoming nonce (unit: milliseconds)
        # Should be slightly bigger than digestWaitTime
        requestWaitTime: 1500ms
        # Time to wait before pull engine ends pull (unit: second)
        responseWaitTime: 2s
        # Alive check interval(unit: second)
        aliveTimeInterval: 5s
        # Alive expiration timeout(unit: second)
        aliveExpirationTimeout: 25s
        # Reconnect interval(unit: second)
        reconnectInterval: 25s
        # This is an endpoint that is published to peers outside of the organization.
        # If this isn't set, the peer will not be known to other organizations.
        externalEndpoint: peer0.org1.binny.com:7051
        # Leader election service configuration
        election:
            # Longest time peer waits for stable membership during leader election startup (unit: second)
            startupGracePeriod: 15s
            # Interval gossip membership samples to check its stability (unit: second)
            membershipSampleInterval: 1s
            # Time passes since last declaration message before peer decides to perform leader election (unit: second)
            leaderAliveThreshold: 10s
            # Time between peer sends propose message and declares itself as a leader (sends declaration message) (unit: second)
            leaderElectionDuration: 5s

        pvtData:
            # pullRetryThreshold determines the maximum duration of time private data corresponding for a given block
            # would be attempted to be pulled from peers until the block would be committed without the private data
            pullRetryThreshold: 60s
            # As private data enters the transient store, it is associated with the peer's ledger's height at that time.
            # transientstoreMaxBlockRetention defines the maximum difference between the current ledger's height upon commit,
            # and the private data residing inside the transient store that is guaranteed not to be purged.
            # Private data is purged from the transient store when blocks with sequences that are multiples
            # of transientstoreMaxBlockRetention are committed.
            transientstoreMaxBlockRetention: 1000
            # pushAckTimeout is the maximum time to wait for an acknowledgement from each peer
            # at private data push at endorsement time.
            pushAckTimeout: 3s
            # Block to live pulling margin, used as a buffer
            # to prevent peer from trying to pull private data
            # from peers that is soon to be purged in next N blocks.
            # This helps a newly joined peer catch up to current
            # blockchain height quicker.
            btlPullMargin: 10
            # the process of reconciliation is done in an endless loop, while in each iteration reconciler tries to
            # pull from the other peers the most recent missing blocks with a maximum batch size limitation.
            # reconcileBatchSize determines the maximum batch size of missing private data that will be reconciled in a
            # single iteration.
            reconcileBatchSize: 10
            # reconcileSleepInterval determines the time reconciler sleeps from end of an iteration until the beginning
            # of the next reconciliation iteration.
            reconcileSleepInterval: 1m
            # reconciliationEnabled is a flag that indicates whether private data reconciliation is enable or not.
            reconciliationEnabled: true
            # skipPullingInvalidTransactionsDuringCommit is a flag that indicates whether pulling of invalid
            # transaction's private data from other peers need to be skipped during the commit time and pulled
            # only through reconciler.
            skipPullingInvalidTransactionsDuringCommit: false

        # Gossip state transfer related configuration
        state:
            # indicates whenever state transfer is enabled or not
            # default value is true, i.e. state transfer is active
            # and takes care to sync up missing blocks allowing
            # lagging peer to catch up to speed with rest network
            enabled: true
            # checkInterval interval to check whether peer is lagging behind enough to
            # request blocks via state transfer from another peer.
            checkInterval: 10s
            # responseTimeout amount of time to wait for state transfer response from
            # other peers
            responseTimeout: 3s
            # batchSize the number of blocks to request via state transfer from another peer
            batchSize: 10
            # blockBufferSize reflect the maximum distance between lowest and
            # highest block sequence number state buffer to avoid holes.
            # In order to ensure absence of the holes actual buffer size
            # is twice of this distance
            blockBufferSize: 100
            # maxRetries maximum number of re-tries to ask
            # for single state transfer request
            maxRetries: 3

    # TLS Settings
    # Note that peer-chaincode connections through chaincodeListenAddress is
    # not mutual TLS auth. See comments on chaincodeListenAddress for more info
    tls:
        # Require server-side TLS
        enabled:  false
        # Require client certificates / mutual TLS.
        # Note that clients that are not configured to use a certificate will
        # fail to connect to the peer.
        clientAuthRequired: false
        # X.509 certificate used for TLS server
        cert:
            file: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/peers/peer0.org1.binny.com/tls/server.crt
        # Private key used for TLS server (and client if clientAuthEnabled
        # is set to true
        key:
            file: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/peers/peer0.org1.binny.com/tls/server.key
        # Trusted root certificate chain for tls.cert
        rootcert:
            file: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/peers/peer0.org1.binny.com/tls/ca.crt
    fileSystemPath: /opt/gocode/src/github.com/hyperledger/peer/production
    BCCSP:
        Default: SW
        # Settings for the SW certificate provider (i.e. when DEFAULT: SW)
        SW:
            # TODO: The default Hash and Security level needs refactoring to be
            # fully configurable. Changing these defaults requires coordination
            # SHA2 is hardcoded in several places, not only BCCSP
            Hash: SHA2
            Security: 256
            # Location of Key Store
            FileKeyStore:
                # If "", defaults to 'mspConfigPath'/keystore
                KeyStore:
        # Settings for the PKCS#11 certificate provider (i.e. when DEFAULT: PKCS11)
        PKCS11:
            # Location of the PKCS11 module library
            Library:
            # Token Label
            Label:
            # User PIN
            Pin:
            Hash:
            Security:
            FileKeyStore:
                KeyStore:

    # Path on the file system where peer will find MSP local configurations
    mspConfigPath: /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/peers/peer0.org1.binny.com/msp

    # Identifier of the local MSP
    # ----!!!!IMPORTANT!!!-!!!IMPORTANT!!!-!!!IMPORTANT!!!!----
    # Deployers need to change the value of the localMspId string.
    # In particular, the name of the local MSP ID of a peer needs
    # to match the name of one of the MSPs in each of the channel
    # that this peer is a member of. Otherwise this peer's messages
    # will not be identified as valid by other nodes.
    localMspId: Org1MSP

    # CLI common client config options
    client:
        # connection timeout
        connTimeout: 3s

    # Delivery service related config
    deliveryclient:
        # It sets the total time the delivery service may spend in reconnection
        # attempts until its retry logic gives up and returns an error
        reconnectTotalTimeThreshold: 3600s

        # It sets the delivery service <-> ordering service node connection timeout
        connTimeout: 3s

        # It sets the delivery service maximal delay between consecutive retries
        reConnectBackoffThreshold: 3600s

    # Type for the local MSP - by default it's of type bccsp
    localMspType: bccsp

    # Used with Go profiling tools only in none production environment. In
    # production, it should be disabled (eg enabled: false)
    profile:
        enabled:     false
        listenAddress: 0.0.0.0:6060

    # The admin service is used for administrative operations such as
    # control over log module severity, etc.
    # Only peer administrators can use the service.
    adminService:
        # The interface and port on which the admin server will listen on.
        # If this is commented out, or the port number is equal to the port
        # of the peer listen address - the admin service is attached to the
        # peer's service (defaults to 7051).
        #listenAddress: 0.0.0.0:7055

    # Handlers defines custom handlers that can filter and mutate
    # objects passing within the peer, such as:
    #   Auth filter - reject or forward proposals from clients
    #   Decorators  - append or mutate the chaincode input passed to the chaincode
    #   Endorsers   - Custom signing over proposal response payload and its mutation
    # Valid handler definition contains:
    #   - A name which is a factory method name defined in
    #     core/handlers/library/library.go for statically compiled handlers
    #   - library path to shared object binary for pluggable filters
    # Auth filters and decorators are chained and executed in the order that
    # they are defined. For example:
    # authFilters:
    #   -
    #     name: FilterOne
    #     library: /opt/lib/filter.so
    #   -
    #     name: FilterTwo
    # decorators:
    #   -
    #     name: DecoratorOne
    #   -
    #     name: DecoratorTwo
    #     library: /opt/lib/decorator.so
    # Endorsers are configured as a map that its keys are the endorsement system chaincodes that are being overridden.
    # Below is an example that overrides the default ESCC and uses an endorsement plugin that has the same functionality
    # as the default ESCC.
    # If the 'library' property is missing, the name is used as the constructor method in the builtin library similar
    # to auth filters and decorators.
    # endorsers:
    #   escc:
    #     name: DefaultESCC
    #     library: /etc/hyperledger/fabric/plugin/escc.so
    handlers:
        authFilters:
          -
            name: DefaultAuth
          -
            name: ExpirationCheck    # This filter checks identity x509 certificate expiration
        decorators:
          -
            name: DefaultDecorator
        endorsers:
          escc:
            name: DefaultEndorsement
            library:
        validators:
          vscc:
            name: DefaultValidation
            library:

    #    library: /etc/hyperledger/fabric/plugin/escc.so
    # Number of goroutines that will execute transaction validation in parallel.
    # By default, the peer chooses the number of CPUs on the machine. Set this
    # variable to override that choice.
    # NOTE: overriding this value might negatively influence the performance of
    # the peer so please change this value only if you know what you're doing
    validatorPoolSize:

    # The discovery service is used by clients to query information about peers,
    # such as - which peers have joined a certain channel, what is the latest
    # channel config, and most importantly - given a chaincode and a channel,
    # what possible sets of peers satisfy the endorsement policy.
    discovery:
        enabled: true
        # Whether the authentication cache is enabled or not.
        authCacheEnabled: true
        # The maximum size of the cache, after which a purge takes place
        authCacheMaxSize: 1000
        # The proportion (0 to 1) of entries that remain in the cache after the cache is purged due to overpopulation
        authCachePurgeRetentionRatio: 0.75
        # Whether to allow non-admins to perform non channel scoped queries.
        # When this is false, it means that only peer admins can perform non channel scoped queries.
        orgMembersAllowedAccess: false
###############################################################################
#
#    VM section
#
###############################################################################
vm:

    # Endpoint of the vm management system.  For docker can be one of the following in general
    # unix:///var/run/docker.sock
    # http://localhost:2375
    # https://localhost:2376
    endpoint: unix:///var/run/docker.sock

    # settings for docker vms
    docker:
        tls:
            enabled: false
            ca:
                file: docker/ca.crt
            cert:
                file: docker/tls.crt
            key:
                file: docker/tls.key

        # Enables/disables the standard out/err from chaincode containers for
        # debugging purposes
        attachStdout: false

        # Parameters on creating docker container.
        # Container may be efficiently created using ipam & dns-server for cluster
        # NetworkMode - sets the networking mode for the container. Supported
        # standard values are: `host`(default),`bridge`,`ipvlan`,`none`.
        # Dns - a list of DNS servers for the container to use.
        # Note:  `Privileged` `Binds` `Links` and `PortBindings` properties of
        # Docker Host Config are not supported and will not be used if set.
        # LogConfig - sets the logging driver (Type) and related options
        # (Config) for Docker. For more info,
        # https://docs.docker.com/engine/admin/logging/overview/
        # Note: Set LogConfig using Environment Variables is not supported.
        hostConfig:
            NetworkMode: host
            Dns:
               # - 192.168.0.1
            LogConfig:
                Type: json-file
                Config:
                    max-size: "50m"
                    max-file: "5"
            Memory: 2147483648

###############################################################################
#
#    Chaincode section
#
###############################################################################
chaincode:

    # The id is used by the Chaincode stub to register the executing Chaincode
    # ID with the Peer and is generally supplied through ENV variables
    # the `path` form of ID is provided when installing the chaincode.
    # The `name` is used for all other requests and can be any string.
    id:
        path:
        name:

    # Generic builder environment, suitable for most chaincode types
    builder: $(DOCKER_NS)/fabric-ccenv:latest

    # Enables/disables force pulling of the base docker images (listed below)
    # during user chaincode instantiation.
    # Useful when using moving image tags (such as :latest)
    pull: false

    golang:
        # golang will never need more than baseos
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)

        # whether or not golang chaincode should be linked dynamically
        dynamicLink: false

    car:
        # car may need more facilities (JVM, etc) in the future as the catalog
        # of platforms are expanded.  For now, we can just use baseos
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)

    java:
        # This is an image based on java:openjdk-8 with addition compiler
        # tools added for java shim layer packaging.
        # This image is packed with shim layer libraries that are necessary
        # for Java chaincode runtime.
        runtime: $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)

    node:
        # need node.js engine at runtime, currently available in baseimage
        # but not in baseos
        runtime: $(BASE_DOCKER_NS)/fabric-baseimage:$(ARCH)-$(BASE_VERSION)

    # Timeout duration for starting up a container and waiting for Register
    # to come through. 1sec should be plenty for chaincode unit tests
    startuptimeout: 300s

    # Timeout duration for Invoke and Init calls to prevent runaway.
    # This timeout is used by all chaincodes in all the channels, including
    # system chaincodes.
    # Note that during Invoke, if the image is not available (e.g. being
    # cleaned up when in development environment), the peer will automatically
    # build the image, which might take more time. In production environment,
    # the chaincode image is unlikely to be deleted, so the timeout could be
    # reduced accordingly.
    executetimeout: 30s

    # There are 2 modes: "dev" and "net".
    # In dev mode, user runs the chaincode after starting peer from
    # command line on local machine.
    # In net mode, peer will run chaincode in a docker container.
    mode: net

    # keepalive in seconds. In situations where the communiction goes through a
    # proxy that does not support keep-alive, this parameter will maintain connection
    # between peer and chaincode.
    # A value <= 0 turns keepalive off
    keepalive: 0

    # system chaincodes whitelist. To add system chaincode "myscc" to the
    # whitelist, add "myscc: enable" to the list below, and register in
    # chaincode/importsysccs.go
    system:
        +lifecycle: enable
        cscc: enable
        lscc: enable
        escc: enable
        vscc: enable
        qscc: enable

    # System chaincode plugins: in addition to being imported and compiled
    # into fabric through core/chaincode/importsysccs.go, system chaincodes
    # can also be loaded as shared objects compiled as Go plugins.
    # See examples/plugins/scc for an example.
    # Like regular system chaincodes, plugins must also be white listed in the
    # chaincode.system section above.
    systemPlugins:
      # example configuration:
      # - enabled: true
      #   name: myscc
      #   path: /opt/lib/myscc.so
      #   invokableExternal: true
      #   invokableCC2CC: true

    # Logging section for the chaincode container
    logging:
      # Default level for all loggers within the chaincode container
      level:  info
      # Override default level for the 'shim' module
      shim:   warning
      # Format for the chaincode container logs
      format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'

###############################################################################
#
#    Ledger section - ledger configuration encompases both the blockchain
#    and the state
#
###############################################################################
ledger:

  blockchain:

  state:
    # stateDatabase - options are "goleveldb", "CouchDB"
    # goleveldb - default state database stored in goleveldb.
    # CouchDB - store state database in CouchDB
    stateDatabase: goleveldb
    # Limit on the number of records to return per query
    totalQueryLimit: 100000
    couchDBConfig:
       # It is recommended to run CouchDB on the same server as the peer, and
       # not map the CouchDB container port to a server port in docker-compose.
       # Otherwise proper security must be provided on the connection between
       # CouchDB client (on the peer) and server.
       couchDBAddress: 127.0.0.1:5984
       # This username must have read and write authority on CouchDB
       username:
       # The password is recommended to pass as an environment variable
       # during start up (eg LEDGER_COUCHDBCONFIG_PASSWORD).
       # If it is stored here, the file must be access control protected
       # to prevent unintended users from discovering the password.
       password:
       # Number of retries for CouchDB errors
       maxRetries: 3
       # Number of retries for CouchDB errors during peer startup
       maxRetriesOnStartup: 12
       # CouchDB request timeout (unit: duration, e.g. 20s)
       requestTimeout: 35s
       # Limit on the number of records per each CouchDB query
       # Note that chaincode queries are only bound by totalQueryLimit.
       # Internally the chaincode may execute multiple CouchDB queries,
       # each of size internalQueryLimit.
       internalQueryLimit: 1000
       # Limit on the number of records per CouchDB bulk update batch
       maxBatchUpdateSize: 1000
       # Warm indexes after every N blocks.
       # This option warms any indexes that have been
       # deployed to CouchDB after every N blocks.
       # A value of 1 will warm indexes after every block commit,
       # to ensure fast selector queries.
       # Increasing the value may improve write efficiency of peer and CouchDB,
       # but may degrade query response time.
       warmIndexesAfterNBlocks: 1
       # Create the _global_changes system database
       # This is optional.  Creating the global changes database will require
       # additional system resources to track changes and maintain the database
       createGlobalChangesDB: false

  history:
    # enableHistoryDatabase - options are true or false
    # Indicates if the history of key updates should be stored.
    # All history 'index' will be stored in goleveldb, regardless if using
    # CouchDB or alternate database for the state.
    enableHistoryDatabase: true
###############################################################################
#
#    Operations section
#
###############################################################################
operations:
    # host and port for the operations server
    listenAddress: 127.0.0.1:9443

    # TLS configuration for the operations endpoint
    tls:
        # TLS enabled
        enabled: false

        # path to PEM encoded server certificate for the operations server
        cert:
            file:

        # path to PEM encoded server key for the operations server
        key:
            file:

        # require client certificate authentication to access all resources
        clientAuthRequired: false

        # paths to PEM encoded ca certificates to trust for client authentication
        clientRootCAs:
            files: []
###############################################################################
#
#    Metrics section
#
#
###############################################################################
metrics:
    # metrics provider is one of statsd, prometheus, or disabled
    provider: disabled

    # statsd configuration
    statsd:
        # network type: tcp or udp
        network: udp

        # statsd server address
        address: 127.0.0.1:8125

        # the interval at which locally cached counters and gauges are pushed
        # to statsd; timings are pushed immediately
        writeInterval: 10s

        # prefix is prepended to all emitted statsd metrics
        prefix:

In the directory that contains the core.yaml configuration file, run the following commands to start the peer node:

export FABRIC_CFG_PATH=$GOPATH/src/github.com/hyperledger/certificate
  • Please make sure that FABRIC_CFG_PATH is set to a path which contains core.yaml
peer node start >> log_peer.log 2>&1 &
ps axu | grep peer
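
A quick sanity check that the peer actually came up (a minimal sketch, assuming the default peer listen port 7051 and the log file redirected above; the exact log wording can differ between Fabric versions):

# The 1.4.x peer normally logs a "Started peer" line once it has fully started
grep -i "started peer" log_peer.log
# The peer's gRPC port (7051 by default) should now be listening
netstat -tupln | grep 7051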

4.4.6 Creating the channel

Now we can create the channel. Creating a channel takes three steps.

==Step 1: create the channel==

export CHANNEL_NAME=binnychannel
cd $GOPATH/src/github.com/hyperledger/certificate
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/users/[email protected]/msp
peer channel create -t 50s -o orderer.binny.com:7050 -c $CHANNEL_NAME  -f  /opt/gocode/src/github.com/hyperledger/certificate/binnychannel.tx
  • -f: the path to the channel transaction file (binnychannel.tx) generated earlier
2020-05-12 01:32:47.537 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-05-12 01:32:47.568 UTC [cli.common] readBlock -> INFO 002 Received block: 0

Once the channel has been created, a channel genesis block file named binnychannel.block is generated in the directory where the command was run.

Problems encountered

Error: failed to create deliver client: orderer client failed to connect to orderer.binny.com:7050: failed to create new connection: context deadline exceeded

  • Cause 1: the host name mappings in /etc/hosts should point to 127.0.0.1 (one way to add them is shown in the sketch after this list):
127.0.0.1 orderer.binny.com
127.0.0.1 peer0.org1.binny.com
127.0.0.1 peer1.org1.binny.com
127.0.0.1 peer0.org2.binny.com
127.0.0.1 peer1.org2.binny.com
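
A minimal way to add these mappings, assuming they are not already present in /etc/hosts (this appends blindly, so remove duplicates by hand if some entries already exist):

# Append the Fabric host name mappings to /etc/hosts (requires sudo)
sudo tee -a /etc/hosts > /dev/null <<'EOF'
127.0.0.1 orderer.binny.com
127.0.0.1 peer0.org1.binny.com
127.0.0.1 peer1.org1.binny.com
127.0.0.1 peer0.org2.binny.com
127.0.0.1 peer1.org2.binny.com
EOF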
  • Cause 2: TLS is not enabled in the configuration file, but the command was run with the TLS option set to true.

  • Check the ports currently listening:
netstat -tupln

  • ==Inspect the contents of binnychannel.tx and genesis.block==
configtxgen -inspectChannelCreateTx binnychannel.tx
configtxgen -inspectBlock genesis.block
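
Both inspect commands print the decoded configuration as JSON on stdout, so redirecting the output to a file makes it easier to read or diff (a small convenience sketch, using the same file names as above):

configtxgen -inspectChannelCreateTx binnychannel.tx > binnychannel.json
configtxgen -inspectBlock genesis.block > genesis.json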

peer channel create -t 50s -o orderer.binny.com:7050 -c binnychannel --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/ordererOrganizations/binny.com/msp/tlscacerts/tlsca.binny.com-cert.pem -f  /opt/gocode/src/github.com/hyperledger/certificate/binnychannel.tx

Running the channel creation command again causes the following error:

Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'binnychannel': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application be at version 0, but it is currently at version 1


If TLS is enabled in the configuration file, run the following instead:

export CORE_PEER_TLS_ENABLED=true
peer channel create -t 50s -o orderer.binny.com:7050 -c binnychannel --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/ordererOrganizations/binny.com/msp/tlscacerts/tlsca.binny.com-cert.pem -f  /opt/gocode/src/github.com/hyperledger/certificate/binnychannel.tx
| Option | Meaning |
| --- | --- |
| -o | Address of the orderer to connect to, in hostname:port form |
| -c | Name of the channel (defaults to mychannel) |
| -f | Channel creation transaction file (binnychannel.tx) generated earlier |
| --tls | Whether to use TLS when communicating with the orderer |
| --cafile | TLS CA certificate of the orderer, required when TLS is enabled |

Step 2: have the running Peer join the channel

export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=peer0.org1.binny.com:7051
export CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/users/[email protected]/msp
peer channel join -b $GOPATH/src/github.com/hyperledger/certificate/binnychannel.block

In the join command above, the argument after -b is the binnychannel.block file generated in Step 1; make sure the path to that file is correct.

peer channel list
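
The other peers can join the same channel by repeating the join command with the environment variables pointed at that peer. A minimal sketch for peer1 of Org1; the address and port below are placeholders and must match the listenAddress that peer was actually started with:

# Hypothetical example: join peer1.org1 to the channel (address/port are placeholders)
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=peer1.org1.binny.com:7051
export CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/users/[email protected]/msp
peer channel join -b $GOPATH/src/github.com/hyperledger/certificate/binnychannel.block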

Step 3: update the anchor peer

export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=peer0.org1.binny.com:7051
export CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/users/[email protected]/msp
peer channel update -o orderer.binny.com:7050 -c binnychannel   -f  $GOPATH/src/github.com/hyperledger/certificate/Org1MSPanchors.tx
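
If Org2 should also have an anchor peer on this channel, the same update can be run with Org2's admin identity, assuming a corresponding Org2MSPanchors.tx was generated with configtxgen earlier; the peer address below is only a placeholder:

# Hypothetical sketch for Org2 (assumes Org2MSPanchors.tx exists; adjust the address)
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_ADDRESS=peer0.org2.binny.com:7051
export CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org2.binny.com/users/[email protected]/msp
peer channel update -o orderer.binny.com:7050 -c binnychannel -f $GOPATH/src/github.com/hyperledger/certificate/Org2MSPanchors.tx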

4.4.7 Deploying and invoking chaincode

Now we can deploy a chaincode (chaincode itself is covered in detail in Chapter 7 of this book) to verify that the Peer and Orderer nodes are deployed correctly. Here we use the example that ships with the Fabric source as the test chaincode. Its source path is:

$GOPATH/src/github.com/hyperledger/fabric-samples/chaincode/chaincode_example02

The chaincode test consists of five steps.

==Step 1: set the environment variables==

export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=peer0.org1.binny.com:7051
export CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/users/[email protected]/msp

==Step 2: install the chaincode==. The chaincode used here is located at:

$GOPATH/src/github.com/hyperledger/fabric/examples/chaincode/go/example02/cmd

Install it:

peer chaincode install -n binny_test_01 -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/example02/cmd
2020-05-12 07:21:24.559 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
2020-05-12 07:21:24.559 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
2020-05-12 07:21:25.482 UTC [chaincodeCmd] install -> INFO 003 Installed remotely response: 

The installed chaincode can be listed with the following command:

peer chaincode list --installed  

==Step 3: instantiate the chaincode==
The following step requires Docker to be installed, because instantiation starts the chaincode in its own container.

export CHANNEL_NAME=binnychannel
peer chaincode instantiate -o  orderer.binny.com:7050 -C $CHANNEL_NAME -n binny_test_01 -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "OR    ('Org1MSP.member','Org2MSP.member')"
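
Instantiation compiles the chaincode and starts a dedicated chaincode container, which can take a while the first time. One way to confirm it succeeded (a sketch; the container naming in the comment is the pattern Fabric 1.4 typically uses and may differ):

# List chaincodes instantiated on the channel
peer chaincode list --instantiated -C $CHANNEL_NAME
# The chaincode runs in its own container, usually named dev-<peer>-binny_test_01-1.0
docker ps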

==Step 4: write data via the chaincode==

==Set the temporary variables and the channel name==

export CHANNEL_NAME=binnychannel
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=peer0.org1.binny.com:7051
export CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/users/[email protected]/msp
peer chaincode invoke -o orderer.binny.com:7050 -C $CHANNEL_NAME -n binny_test_01 -c '{"Args":["invoke","a","b","1"]}'
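
If TLS was enabled in the configuration, the invoke also needs the orderer's TLS CA certificate, analogous to the TLS variant of the channel creation command above (a sketch using the same certificate path):

peer chaincode invoke -o orderer.binny.com:7050 --tls --cafile /opt/gocode/src/github.com/hyperledger/certificate/crypto-config/ordererOrganizations/binny.com/msp/tlscacerts/tlsca.binny.com-cert.pem -C $CHANNEL_NAME -n binny_test_01 -c '{"Args":["invoke","a","b","1"]}'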

==Step 5: query data via the chaincode==

export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=peer0.org1.binny.com:7051
export CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/certificate/crypto-config/peerOrganizations/org1.binny.com/users/[email protected]/msp
peer chaincode query -C $CHANNEL_NAME -n binny_test_01 -c '{"Args":["query","a"]}'
peer chaincode query -C $CHANNEL_NAME -n binny_test_01 -c '{"Args":["query","b"]}'

If all of the commands above execute correctly, a simple Fabric network has been deployed. With the values used here, querying a should return 99 and querying b should return 201, since the chaincode was initialized with a=100 and b=200 and the invoke moved 1 from a to b.
