The Binder Implementation in the Native Framework Layer

Please credit the source when reposting:

http://blog.csdn.net/yujun411522/article/details/46418491
This article is from yujun411522's blog.



In Linux, processes are isolated from one another; to communicate they must go through an Inter-Process Communication (IPC) mechanism. Linux offers several indirect IPC mechanisms, such as signals, pipes, message queues, semaphores, and shared memory, but they are either inefficient or awkward to wrap in a clean API, so Android does not rely on them heavily; instead it makes extensive use of the Binder mechanism. Binder is an Android extension to Linux, implemented as a character device, through which different processes communicate indirectly. It has a Native layer and a Java layer; this article looks at the Native layer first.
Android contains all kinds of services (Service). The process hosting a Service is called the Server process, and a process using it is a Client process — a classic C/S structure, with two extra pieces added:
Client: the side that uses the Service
Server: the side that provides the Service
Proxy: lives on the Client side and exposes the service interface, mainly to hide the details of client/server communication from the client
Stub: lives on the Server side and plays a role similar to the Proxy, hiding the communication details between the Proxy and the Server; it acts as the service's local representative

5.1 How ServiceManager Starts
Because Android uses so many services, a dedicated component, ServiceManager, is added to make managing them easier; it supports registering and looking up services. When a Service starts, it must register its information with ServiceManager.
The relationship between Client, Server, and ServiceManager is shown below:
[Figure 1: relationship between Client, Server, and ServiceManager]
1 Registering a service: after a Service starts, it must register itself with ServiceManager; at that moment the server acts as ServiceManager's client, and ServiceManager is the server.
2 Looking up a service: when a client needs a particular service, it only has to give ServiceManager a service name, and ServiceManager returns that service to the client; here too ServiceManager acts as the server.
3 Using a service: once the client has established a channel to the service, it can use the service object directly; at that point the service acts as the server.
4 Most importantly: all three roles are built on Binder — communication between any two of them uses Binder as its foundation.

Let's start with how ServiceManager boots.
ServiceManager maintains a list of service records, so when a Client needs a service it only has to give ServiceManager the service's name. ServiceManager's startup is configured in the init.rc file:
service servicemanager /system/bin/servicemanager
    class core   #belongs to the core class
    user system  #runs as the system user and group
    group system
    critical     #a critical service: if it restarts too many times within a window, the system reboots
    onrestart restart zygote     #restarting it forces zygote and media to restart too
    onrestart restart media
ServiceManager's program is frameworks/base/cmds/servicemanager/service_manager.c; look at its main function directly:
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;
     // 1. open the binder device and set it up for IPC
    bs = binder_open(128*1024);
    // 2. register as the context manager 
    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
     //enter an endless loop waiting for IPC data
    binder_loop(bs, svcmgr_handler);
    return 0;
}
ServiceManager's startup consists of three parts:
1. call binder_open to open the binder device and map shared memory
2. call binder_become_context_manager to register as the context manager
3. call binder_loop to wait in an endless loop handling IPC traffic

5.1.1 binder_open opens the binder device and maps shared memory
binder_open initializes binder communication; it is in frameworks/base/cmds/servicemanager/binder.c:
struct binder_state *binder_open(unsigned mapsize)
{
     //mapsize = 128*1024, i.e. 128K
     //allocate a binder_state structure
    struct binder_state *bs;
    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return 0;
    }
    //open the binder device for reading and writing
    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }
     //map the device into this process's virtual address space, i.e. into the ServiceManager process
    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

        /* TODO: check version */

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return 0;
}
This uses the binder_state structure:
struct binder_state
{
    int fd;//file descriptor
    void *mapped;//start address of the mapped region
    unsigned mapsize;//size of the mapped region
};
This function does three things: 1. allocate a binder_state structure; 2. open the binder device file for reading and writing; 3. map the device file into the process's virtual address space.
Because kernel space is shared, a kernel buffer can hold data passed between processes, which effectively gives them shared memory. In the function above this is set up with open and mmap: open opens the binder device, and mmap maps it into the process's virtual address space while telling the kernel to create a 128K buffer for IPC data. A region of the ServiceManager process's memory is thus tied to a region of kernel memory, and servicemanager shares data through that kernel buffer.

5.1.2 binder_become_context_manager registers it as the context manager
After opening and mapping the binder device, servicemanager registers itself as the context manager:
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
ioctl in turn reaches binder_ioctl, which lives in kernel/drivers/staging/android/binder.c:
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
     //the arguments are bs->fd, BINDER_SET_CONTEXT_MGR, 0
        int ret;
        struct binder_proc *proc = filp->private_data;
        struct binder_thread *thread;
        unsigned int size = _IOC_SIZE(cmd);
        void __user *ubuf = (void __user *)arg;

        /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

        ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
        if (ret)
                return ret;

        mutex_lock(&binder_lock);
        thread = binder_get_thread(proc);
        if (thread == NULL) {
                ret = -ENOMEM;
                goto err;
        }

         //here cmd == BINDER_SET_CONTEXT_MGR
        switch (cmd) {
          .....       
         case BINDER_SET_CONTEXT_MGR:
               //there can only be one context manager
                if (binder_context_mgr_node != NULL) {
                        ret = -EBUSY;
                        goto err;
                }
               //if a context manager uid is already recorded, check whether the caller is the same user
                if (binder_context_mgr_uid != -1) {
                        if (binder_context_mgr_uid != current->cred->euid) {                               
                                ret = -EPERM;
                                goto err;
                        }
                } else
                        //none recorded yet, so record one
                        binder_context_mgr_uid = current->cred->euid;//remember the uid
                         //create the context manager node
                binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
                if (binder_context_mgr_node == NULL) {
                        ret = -ENOMEM;
                        goto err;
                }
                //bump the reference counts
                binder_context_mgr_node->local_weak_refs++;
                binder_context_mgr_node->local_strong_refs++;
                binder_context_mgr_node->has_strong_ref = 1;
                binder_context_mgr_node->has_weak_ref = 1;
                break;
      
        default:
                ret = -EINVAL;
                goto err;
        }
        ret = 0;
     ......
}
As you can see, binder_ioctl handles several kinds of commands: BINDER_WRITE_READ, BINDER_SET_MAX_THREADS, BINDER_SET_CONTEXT_MGR, and BINDER_THREAD_EXIT; several of these come up again later.

5.1.3 binder_loop waits in an endless loop handling IPC traffic
ServiceManager has to handle both service registration requests and clients' lookup requests, so it loops forever accepting IPC requests:
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
   
    readbuf[0] = BC_ENTER_LOOPER;
     //call binder_write
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {//loop forever
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;
        //call ioctl again, this time to read IPC data
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
         ...
        //call binder_parse to parse the data just read
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
       ....
    }
}
binder_loop mainly involves the binder_write call and the endless loop; look at binder_write first.
1. The binder_write function 
int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    //bs,data=readbuf[0]
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;//this is a write: read_size == 0
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
     //call ioctl with the BINDER_WRITE_READ command 
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    ..
    return res;
}
This again goes through ioctl into binder_ioctl; look only at the code handling BINDER_WRITE_READ:
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
     ......
case BINDER_WRITE_READ: {//cmd == BINDER_WRITE_READ
                struct binder_write_read bwr;
               
                if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
                        ret = -EFAULT;
                        goto err;
                }
              
               //write_size was set > 0 above, so this branch runs
                if (bwr.write_size > 0) {
                        //call binder_thread_write
                        ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
                          ..
                }
                 //read_size was set to 0 above, so this branch is skipped
                if (bwr.read_size > 0) {
                    ...
                }
              
                if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
                        ret = -EFAULT;
                        goto err;
                }
                break;
        }
}
Because bwr has write_size > 0 and read_size == 0, binder_thread_write is called:
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
{
        uint32_t cmd;
        void __user *ptr = buffer + *consumed;
        void __user *end = buffer + size;
        while (ptr < end && thread->return_error == BR_OK) {
                if (get_user(cmd, (uint32_t __user *)ptr))
                        return -EFAULT;
                ptr += sizeof(uint32_t);
                if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
                        binder_stats.bc[_IOC_NR(cmd)]++;
                        proc->stats.bc[_IOC_NR(cmd)]++;
                        thread->stats.bc[_IOC_NR(cmd)]++;
                }
                switch (cmd) {                
                  ....
                case BC_ENTER_LOOPER:
                     
                        if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
                                thread->looper |= BINDER_LOOPER_STATE_INVALID;//flag an invalid state                               
                        }
                        thread->looper |= BINDER_LOOPER_STATE_ENTERED;//mark the state as BINDER_LOOPER_STATE_ENTERED
                        break;            
                  
                   ....              
        }
        return 0;
}
The BC_ENTER_LOOPER case marks the looper with BINDER_LOOPER_STATE_ENTERED, indicating the current thread is now a Binder looper.
When binder_thread_write finishes, control returns to binder_write in user space, and once binder_write completes the code enters the endless loop:
    for (;;) {//loop forever
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;
        //call ioctl again 
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        ...
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        ....
    }
2. The endless loop
Inside the loop, read_size > 0 but write_size == 0; another BINDER_WRITE_READ command is sent, binder_ioctl runs again, and execution goes straight into the BINDER_WRITE_READ branch:
 case BINDER_WRITE_READ: {
                struct binder_write_read bwr;
               ..
                if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
                        ret = -EFAULT;
                        goto err;
                }
               ...

                if (bwr.write_size > 0) {//write_size == 0, so this branch is skipped
                     ....
                }
                if (bwr.read_size > 0) {
                         //call binder_thread_read
                        ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
                        if (!list_empty(&proc->todo))
                                wake_up_interruptible(&proc->wait);
                        if (ret < 0) {
                                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                                        ret = -EFAULT;
                                goto err;
                        }
                }
              
                if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
                        ret = -EFAULT;
                        goto err;
                }
                break;
        }
This goes straight into binder_thread_read, which reads the IPC data. Once the data has been read, binder_parse is called to parse it:
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uint32_t *ptr, uint32_t size, binder_handler func)
{
    int r = 1;
    uint32_t *end = ptr + (size / 4);

    while (ptr < end) {
        uint32_t cmd = *ptr++;
 
        switch(cmd) {
       .....
        case BR_TRANSACTION: {
            struct binder_txn *txn = (void *) ptr;
            if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
                LOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {//func is the supplied handler, here svcmgr_handler 
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

               //initialize the msg and reply buffers
                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
               //invoke the handler func, which here is svcmgr_handler
                res = func(bs, txn, &msg, &reply);
                //send the reply produced above back through the binder
                binder_send_reply(bs, &reply, txn->data, res);
            }
            ptr += sizeof(*txn) / sizeof(uint32_t);
            break;
        }
        .....
        }
    }
    return r;
}
After initializing the msg and reply objects, func — i.e. svcmgr_handler — is called to do the work:
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;

//    LOGI("target=%p code=%d pid=%d uid=%d\n",
//         txn->target, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target != svcmgr_handle)
        return -1;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s));
        return -1;
    }

    switch(txn->code) {//the request code written when adding or querying a service
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
          //look up the service
        ptr = do_find_service(bs, s, len);
        if (!ptr)
            break;
        bio_put_ref(reply, ptr);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = bio_get_ref(msg);
          //register the service
        if (do_add_service(bs, s, len, ptr, txn->sender_euid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        unsigned n = bio_get_uint32(msg);

        si = svclist;
          //walk the service list
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        LOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
When a client calls getService or addService, the corresponding codes are SVC_MGR_CHECK_SERVICE and SVC_MGR_ADD_SERVICE, handled by do_find_service and do_add_service respectively.
1 The do_add_service function
int do_add_service(struct binder_state *bs,
                   uint16_t *s, unsigned len,
                   void *ptr, unsigned uid)
{
    struct svcinfo *si;
     ..
    //first check whether this uid may register the service (permission check)
    if (!svc_can_register(uid, s)) {
        LOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n",
             str8(s), ptr, uid);
        return -1;
    }

    si = find_svc(s, len);
    if (si) {//the name is already registered
        if (si->ptr) {
         svcinfo_death(bs, si);
        }
        si->ptr = ptr;
    } else {
          //allocate memory for the newly registered service
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {//OOM
            LOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
                 str8(s), ptr, uid);
            return -1;
        }
        si->ptr = ptr;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = svcinfo_death;
        si->death.ptr = si;
        //link the new service into svclist
        si->next = svclist;
        svclist = si;
    }

    binder_acquire(bs, ptr);
    binder_link_to_death(bs, ptr, &si->death);
    return 0;
}
The permission-check function:
int svc_can_register(unsigned uid, uint16_t *name)
{
    unsigned n;
    
    if ((uid == 0) || (uid == AID_SYSTEM))
        return 1;

    for (n = 0; n < sizeof(allowed) / sizeof(allowed[0]); n++)
        if ((uid == allowed[n].uid) && str16eq(name, allowed[n].name))
            return 1;
    return 0;
}
It refers to the allowed array:
static struct {
    unsigned uid;//user id
    const char *name;//service name
} allowed[] = {
#ifdef LVMX
    { AID_MEDIA, "com.lifevibes.mx.ipc" },
#endif
    { AID_MEDIA, "media.audio_flinger" },
    { AID_MEDIA, "media.player" },
    { AID_MEDIA, "media.camera" },
    { AID_MEDIA, "media.audio_policy" },
    { AID_DRM,   "drm.drmManager" },
    { AID_NFC,   "nfc" },
    { AID_RADIO, "radio.phone" },
    { AID_RADIO, "radio.sms" },
    { AID_RADIO, "radio.phonesubinfo" },
    { AID_RADIO, "radio.simphonebook" },
/* TODO: remove after phone services are updated: */
    { AID_RADIO, "phone" },
    { AID_RADIO, "sip" },
    { AID_RADIO, "isms" },
    { AID_RADIO, "iphonesubinfo" },
    { AID_RADIO, "simphonebook" }
};
So not just any user can add a service: only root (uid 0), AID_SYSTEM, or a (uid, name) pair listed in the allowed array may register one. With registration covered, let's see how services are looked up.

2 The do_find_service function 
void *do_find_service(struct binder_state *bs, uint16_t *s, unsigned len)
{
    struct svcinfo *si;
    si = find_svc(s, len);

    if (si && si->ptr) {
        return si->ptr;
    } else {
        return 0;
    }
}
It calls find_svc:
struct svcinfo *find_svc(uint16_t *s16, unsigned len)
{
    struct svcinfo *si;
    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return 0;
}
So it simply walks svclist looking for the matching service.
Everything so far has been on the server side: it can respond to add-service and find-service requests. But who adds the services? It's important to be clear on this: a service registers itself with servicemanager, so at that moment the service is the client.

5.2 Starting and Registering a Service
Take the startup of the media services as an example; the corresponding main function is:
int main(int argc, char** argv)
{
     //create the ProcessState object and store it in proc
    sp<ProcessState> proc(ProcessState::self());
     //obtain the ServiceManager proxy object
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    //register and start the four services below
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    //start a thread pool
    ProcessState::self()->startThreadPool();
     //join the current thread to the pool
    IPCThreadState::self()->joinThreadPool();
}
Four steps in total: 1. create the ProcessState object; 2. obtain the ServiceManager proxy; 3. register the services; 4. start the thread pool.
5.2.1 Creating the ProcessState object
This calls ProcessState::self(); the ProcessState class is in frameworks/base/lib/binder/ProcessState.cpp:
sp<ProcessState> ProcessState::self()
{  
    //a singleton: each process has exactly one ProcessState
    if (gProcess != NULL) return gProcess;    
    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}
If it is null, a ProcessState is constructed:
ProcessState::ProcessState()
    : mDriverFD(open_driver())//call open_driver and store its return value in mDriverFD
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
      
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        //i.e. map the binder device into the media service process
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
         ....
}
This mirrors what binder_open does in servicemanager. First look at the important open_driver function, whose return value becomes mDriverFD:
static int open_driver()
{ 
     //open the binder device for reading and writing
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        //close fd automatically if this process ever calls an exec function
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers;
        //send the BINDER_VERSION command and store the version number in vers
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            LOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        //check that the version numbers match
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            LOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = 15;
        //tell the binder driver this server's thread pool holds at most 15 threads
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            LOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        LOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}
So: open the binder device, check the binder protocol version, and set the server's maximum thread count.
This part plays the same role as binder_open in ServiceManager: open the binder device and read or set a few values.

5.2.2 Obtaining the ServiceManager proxy
Whether adding or querying a service, the first step is to obtain ServiceManager's proxy object and communicate with ServiceManager through it. Several layers are involved:
[Figure 2: the layers involved in obtaining the ServiceManager proxy]
All of this goes through the defaultServiceManager function:
1. Binder communication interfaces: the transport itself — IBinder, BBinder, and BpBinder. BBinder and BpBinder are both subclasses of IBinder; BBinder runs on the service side as the service's local representative, while BpBinder is the proxy on the client side.
2. Binder service interface: defines which of the server's operations a client may call, declared by IServiceManager. This part varies with the services the server provides.
3. Proxy: implemented by BpInterface and BpServiceManager. BpInterface inherits from BpRefBase, whose member mRemote stores the client-side BpBinder object, and BpServiceManager implements the methods declared in IServiceManager.
4. Stub: implemented by BnInterface and BnServiceManager; not used here.

The job of defaultServiceManager is to obtain ServiceManager's proxy object; it lives in frameworks/native/libs/binder/IServiceManager.cpp:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    
    {
          //gDefaultServiceManager is also a singleton  
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
          //first call getContextObject, then pass the result through interface_cast
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    
    return gDefaultServiceManager;
}
Producing the IServiceManager object takes two steps: 1. ProcessState->getContextObject(NULL), which returns a BpBinder object; 2. converting that BpBinder into a BpServiceManager via interface_cast. Take them one at a time.
1 getContextObject
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{  
    return getStrongProxyForHandle(0);
}
This calls getStrongProxyForHandle with an argument of 0:
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
     //handle == 0
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
      
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            // b=new BpBinder(0);
            b = new BpBinder(handle); 
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {            
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
Because handle == 0, this effectively returns BpBinder(0). Now look at BpBinder's constructor:
//handle=0
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    LOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}
It calls IPCThreadState::self()->incWeakHandle(handle); look at IPCThreadState::self() first:
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);//a get here — where is the matching set? in the constructor
        if (st) return st;
        return new IPCThreadState;//create an IPCThreadState object; each thread gets exactly one
    }
    
    if (gShutdown) return NULL;
    
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
This involves pthread_getspecific. Linux threads provide Thread Local Storage (TLS): a variable the current thread can access but other threads cannot, manipulated through pthread_getspecific and pthread_setspecific, much like the get and set of a hashmap keyed by pthread_key_t. So self() returns the calling thread's single IPCThreadState. Look at the IPCThreadState constructor:
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),//store the ProcessState in mProcess
      mMyThreadId(androidGetTid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);//store this object under the gTLS key
    clearCaller();
    mIn.setDataCapacity(256);//mIn and mOut are both Parcels; set their capacities
    mOut.setDataCapacity(256);
}
After the IPCThreadState is created, incWeakHandle is called:
void IPCThreadState::incWeakHandle(int32_t handle)//handle =0
{
     //write BC_INCREFS and 0 into mOut
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
}
Apart from returning a BpBinder object, getContextObject does not talk to the binder driver at all; ProcessState::self()->getContextObject(NULL) is equivalent to new BpBinder(0).

Moving on:
2 interface_cast
interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL))
is equivalent to interface_cast<IServiceManager>(new BpBinder(0)),
and interface_cast<IServiceManager>(obj) is equivalent to IServiceManager::asInterface(obj),
so the whole thing reduces to IServiceManager::asInterface(new BpBinder(0)).
Look at IServiceManager's asInterface method:
android::sp<IServiceManager>IServiceManager::asInterface(             
            const android::sp<android::IBinder>& obj)                 
    {                                                                 
        android::sp<IServiceManager> intr;                                 
        if (obj != NULL) {                                            
            intr = static_cast<IServiceManager*>(                          
                obj->queryLocalInterface(  //BpBinder::queryLocalInterface returns NULL                              
                       IServiceManager::descriptor).get());               
            if (intr == NULL) { //this branch runs: effectively new BpServiceManager(new BpBinder(0))                                        
                intr = new BpServiceManager(obj);                          
            }                                                            
        }                                                               
        return intr;                                                     
    } 
It checks whether queryLocalInterface returns null. obj is a BpBinder, which inherits queryLocalInterface from IBinder, in frameworks/native/libs/binder/Binder.cpp:
sp<IInterface>  IBinder::queryLocalInterface(const String16& descriptor)
{
    return NULL;
}
It simply returns NULL, so a BpServiceManager is created; that class is in frameworks/native/libs/binder/IServiceManager.cpp:
 BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)//impl = new BpBinder(0); invoke the parent BpInterface constructor
    {
    }
BpServiceManager's constructor invokes the constructor of its parent BpInterface, in frameworks/native/libs/binder/IInterface.h:
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)//invoke the parent BpRefBase constructor
{
}
which in turn invokes the BpRefBase constructor; that class is in frameworks/native/libs/binder/Binder.h:
class BpRefBase : public virtual RefBase
{

    inline  IBinder*        remote()                { return mRemote; }
    inline  IBinder*        remote() const          { return mRemote; }

    private:
                            BpRefBase(const BpRefBase& o);
    BpRefBase&              operator=(const BpRefBase& o);
    IBinder* const          mRemote;
};
So the BpBinder ends up stored in BpRefBase's mRemote member, and the remote() method returns mRemote.

5.2.3 Registering a service
media runs four services: media.audio_flinger, media.player, media.camera, and media.audio_policy; take AudioFlinger as the example. Registering a service is itself a C/S exchange, following the same ServiceManager-over-Binder pattern as before, except that the service is now AudioFlinger, the service interface is IAudioFlinger, and the service proxy is BpAudioFlinger:
[Figure 3: the Binder layers involved in registering AudioFlinger]
AudioFlinger is started by its instantiate method, inherited from its parent BinderService<AudioFlinger>; BinderService lives in frameworks/native/include/binder/BinderService.h, and the method calls publish:
 static status_t publish() {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(String16(SERVICE::getServiceName()), new SERVICE());
    }
defaultServiceManager returns the BpServiceManager; its addService is in frameworks/base/libs/binder:    
 virtual status_t addService(const String16& name, const sp<IBinder>& service)
    {
          //arguments: "media.audio_flinger", new AudioFlinger

        Parcel data, reply;
          //write the service information into data
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        data.writeStrongBinder(service);
          //use the BpBinder (via remote()) to call transact; the result comes back in reply
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }
So the service information is written into data first, then the BpBinder's transact method sends data and puts the result into reply. Now look at transact in BpBinder:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
which calls IPCThreadState's transact method:
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS; 
   
    if (err == NO_ERROR) {
          //first call writeTransactionData to package the data     
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
   
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
   
    if ((flags & TF_ONE_WAY) == 0) {      
        if (reply) {
             //wait for the response
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }       
    } else {
        err = waitForResponse(NULL, NULL);
    }
   
    return err;
}
A request/response exchange; look at writeTransactionData first:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
     //arguments: BC_TRANSACTION, TF_ACCEPT_FDS, 0, ADD_SERVICE_TRANSACTION, data, NULL 
    binder_transaction_data tr;//the structure the binder driver expects

    tr.target.handle = handle;//0
    tr.code = code;//ADD_SERVICE_TRANSACTION  
    tr.flags = binderFlags;//TF_ACCEPT_FDS 
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
   
    const status_t err = data.errorCheck();//check for errors
    if (err == NO_ERROR) {//no error: fill in the payload fields of tr
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
       ...
    } else {
        return (mLastError = err);
    }
    //write BC_TRANSACTION and tr into mOut
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
   
    return NO_ERROR;
}
So the IPC request is packaged by writing BC_TRANSACTION and a binder_transaction_data into mOut. Now look at waitForResponse:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;//first call talkWithDriver 
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;//bail out on error
        if (mIn.dataAvail() == 0) continue;//nothing to read from the driver yet; keep looping
       
        cmd = mIn.readInt32();//read the return command; anything not matched below falls into default
      
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
          ..
       
        case BR_DEAD_REPLY:
          ..

        case BR_FAILED_REPLY:
          ..
       
        case BR_ACQUIRE_RESULT:
          ..
       
        case BR_REPLY:
          ..

        default:
            err = executeCommand(cmd); // dispatch to executeCommand
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
   
    return err;
}
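The loop above keeps reading driver return codes until one of them ends the wait. A minimal sketch of that decision, under stated assumptions (the names and enum values below are illustrative, not the kernel's real BR_* values; `expectingReply` stands for "the caller passed a reply Parcel"):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative return codes; real BR_* values live in the binder kernel header.
enum : uint32_t {
    BR_TRANSACTION_COMPLETE = 1, BR_DEAD_REPLY, BR_FAILED_REPLY,
    BR_ACQUIRE_RESULT, BR_REPLY, BR_TRANSACTION, BR_NOOP
};

// Does this return code terminate waitForResponse, or does the loop keep
// spinning (routing the command through executeCommand)?
bool terminatesWait(uint32_t cmd, bool expectingReply) {
    switch (cmd) {
    case BR_TRANSACTION_COMPLETE:
        return !expectingReply;   // oneway call: done once the send is queued
    case BR_DEAD_REPLY:
    case BR_FAILED_REPLY:
    case BR_ACQUIRE_RESULT:
    case BR_REPLY:
        return true;              // a reply arrived, or a terminal error
    default:
        return false;             // handled by executeCommand, keep looping
    }
}
```

This is why a server thread sitting in this loop can service incoming BR_TRANSACTION work items while a synchronous caller blocks until its BR_REPLY shows up.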
waitForResponse first calls talkWithDriver:
// doReceive defaults to true when no argument is given
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
        
    binder_write_read bwr; // the payload carried by the BINDER_WRITE_READ ioctl
    // dataPosition is how far we have read, dataSize how much is stored;
    // dataPosition >= dataSize means the previous incoming data is fully consumed
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();    
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;    
    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity(); // tell the driver how much it may write back
        bwr.read_buffer = (long unsigned int)mIn.data(); // and where
    } else {
        bwr.read_size = 0;
    }      
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(HAVE_ANDROID_OS)
        // send BINDER_WRITE_READ to the driver fd opened by binder_open;
        // -EINTR means the ioctl was interrupted, so retry
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
    } while (err == -EINTR);
   
    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize()) // partial write: drop only the consumed prefix
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0); // everything was written: reset the buffer
        }
        if (bwr.read_consumed > 0) { // the driver returned data
            mIn.setDataSize(bwr.read_consumed); // amount of readable data
            mIn.setDataPosition(0); // start reading from offset 0
        }      
        return NO_ERROR;
    }    
    return err;
}
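The bookkeeping at the end of talkWithDriver can be sketched on its own. The `Buf` type below is a hypothetical stand-in for Parcel, and `write_consumed`/`read_consumed` mirror the fields the driver fills into binder_write_read:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal analogs of mOut / mIn after a (simulated) BINDER_WRITE_READ ioctl.
struct Buf {
    std::vector<char> data;
    size_t pos = 0;                       // dataPosition
    size_t size() const { return data.size(); }
};

// Apply the same bookkeeping talkWithDriver does once the ioctl returns:
// drop the consumed prefix of the outgoing buffer (or reset it entirely),
// and expose exactly read_consumed bytes of the incoming buffer.
void settleBuffers(Buf& out, Buf& in,
                   size_t write_consumed, size_t read_consumed) {
    if (write_consumed > 0) {
        if (write_consumed < out.size())  // partial write: keep the tail
            out.data.erase(out.data.begin(), out.data.begin() + write_consumed);
        else
            out.data.clear();             // everything was written: reset
    }
    if (read_consumed > 0) {
        in.data.resize(read_consumed);    // setDataSize(read_consumed)
        in.pos = 0;                       // setDataPosition(0)
    }
}
```

The point of the design: one ioctl can both flush pending writes and pull back replies, so these two adjustments always happen together.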
talkWithDriver exchanges data with the Binder driver; the returned command (here BR_TRANSACTION, on the server side) is then handled by executeCommand:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
   
    switch (cmd) {
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            if (result != NO_ERROR) break;
            // (elided in this excerpt) the payload carried by tr is wrapped
            // in a Parcel named buffer before being dispatched
            Parcel buffer;
            ...
            Parcel reply;          
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                // BBinder's transact method
                const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);

            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            }       
           
            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }         
        }
        break;
    
    default:
        printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }  
    return result;
}

5.2.4 Starting the thread pool
The thread pool is started with the following code:
    // spawn a pool thread
    ProcessState::self()->startThreadPool();
    // make the current thread part of the pool as well
    IPCThreadState::self()->joinThreadPool();
1. startThreadPool
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true); // spawn the first ("main") pool thread
    }
}
which calls spawnPooledThread:
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        int32_t s = android_atomic_add(1, &mThreadPoolSeq);
        char buf[32];
        sprintf(buf, "Binder Thread #%d", s); // thread name, e.g. "Binder Thread #1"
        sp<Thread> t = new PoolThread(isMain);
        t->run(buf);
    }
}
A PoolThread is created and its run method is invoked.
Internally, PoolThread calls IPCThreadState::self()->joinThreadPool(isMain), so ProcessState::startThreadPool() amounts to starting a new thread that calls IPCThreadState::self()->joinThreadPool.
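That startThreadPool/spawnPooledThread interplay can be sketched without real threads. The class below is a hypothetical single-process stand-in (FakeProcessState is an illustrative name): the `mStarted` flag makes later calls no-ops, and the sequence counter mirrors mThreadPoolSeq, which names the threads. The real PoolThread would run joinThreadPool on the new thread; here we only record the spawn.

```cpp
#include <cassert>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical sketch of ProcessState::startThreadPool.
class FakeProcessState {
public:
    void startThreadPool() {
        std::lock_guard<std::mutex> l(mLock);   // AutoMutex _l(mLock)
        if (!mStarted) {                        // idempotent: only first call spawns
            mStarted = true;
            spawnPooledThread(true);
        }
    }
    const std::vector<std::string>& spawned() const { return mSpawned; }
private:
    void spawnPooledThread(bool isMain) {
        int s = ++mSeq;  // android_atomic_add(1, &mThreadPoolSeq)
        // the real run() name comes from sprintf(buf, "Binder Thread #%d", s)
        mSpawned.push_back(std::string(isMain ? "main:" : "worker:") +
                           "Binder Thread #" + std::to_string(s));
    }
    std::mutex mLock;
    bool mStarted = false;
    int mSeq = 0;
    std::vector<std::string> mSpawned;
};
```

Calling startThreadPool twice therefore still yields exactly one pool thread; additional workers are spawned on demand by the driver, not by repeated calls.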
2. joinThreadPool
void IPCThreadState::joinThreadPool(bool isMain)
{     
    // isMain == true here, so write BC_ENTER_LOOPER
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    
    androidSetThreadSchedulingGroup(mMyThreadId, ANDROID_TGROUP_DEFAULT);
       
    status_t result;
    do {
        int32_t cmd;
       
        // When we've cleared the incoming command queue, process any pending derefs
        if (mIn.dataPosition() >= mIn.dataSize()) {
            size_t numPending = mPendingWeakDerefs.size();
            if (numPending > 0) {
                for (size_t i = 0; i < numPending; i++) {
                    RefBase::weakref_type* refs = mPendingWeakDerefs[i];
                    refs->decWeak(mProcess.get());
                }
                mPendingWeakDerefs.clear();
            }

            numPending = mPendingStrongDerefs.size();
            if (numPending > 0) {
                for (size_t i = 0; i < numPending; i++) {
                    BBinder* obj = mPendingStrongDerefs[i];
                    obj->decStrong(mProcess.get());
                }
                mPendingStrongDerefs.clear();
            }
        }

        // now get the next command to be processed, waiting if necessary
        // exchange data with the binder driver
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail();
            if (IN < sizeof(int32_t)) continue;
            cmd = mIn.readInt32();           
            // execute the command that came back
            result = executeCommand(cmd);
        }       
     
        androidSetThreadSchedulingGroup(mMyThreadId, ANDROID_TGROUP_DEFAULT);

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    // on exit, tell the driver this thread is leaving the loop
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
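The loop's exit conditions above are worth isolating. The sketch below is hypothetical (the function name and the TIMED_OUT stand-in value are illustrative; the real constants come from the Android status and errno headers): a spawned worker retires when the driver says it timed out waiting for work, while any thread, main or not, leaves once the driver fd is dead.

```cpp
#include <cassert>
#include <cerrno>

// Stand-in for Android's TIMED_OUT status code (illustrative value).
static const int FAKE_TIMED_OUT = -110;

// Should this pooled thread leave joinThreadPool's loop?
bool shouldExitLoop(int result, bool isMain) {
    if (result == FAKE_TIMED_OUT && !isMain)
        return true;                          // idle worker thread retires
    return result == -ECONNREFUSED || result == -EBADF; // driver is gone
}
```

This asymmetry is why the thread started with BC_ENTER_LOOPER stays alive for the life of the process while BC_REGISTER_LOOPER threads come and go with load.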

5.3 The client obtains a service proxy
Taking audio_flinger as the example again, a client obtains the audio_flinger service through get_audio_flinger in frameworks/av/media/libmedia/AudioSystem.cpp:
// establish binder interface to AudioFlinger service
const sp<IAudioFlinger>& AudioSystem::get_audio_flinger()
{
    Mutex::Autolock _l(gLock);
    if (gAudioFlinger.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.audio_flinger"));
            if (binder != 0)
                break;
          usleep(500000); // 0.5 s
        } while(true);
        if (gAudioFlingerClient == NULL) {
            gAudioFlingerClient = new AudioFlingerClient();
        } else {
            if (gAudioErrorCallback) {
                gAudioErrorCallback(NO_ERROR);
            }
         }
        binder->linkToDeath(gAudioFlingerClient);
        gAudioFlinger = interface_cast<IAudioFlinger>(binder);
        gAudioFlinger->registerClient(gAudioFlingerClient);
    }
        return gAudioFlinger;
}
defaultServiceManager returns a BpServiceManager, whose getService method is then called:
  virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++){
            // try at most five times, one checkService call per second
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
            LOGI("Waiting for service %s...\n", String8(name).string());
            sleep(1);
        }
        return NULL;
    }
checkService itself:
virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);//"media.audio_flinger"
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }
remote() returns BpBinder(0); its transact goes through IPCThreadState::transact, and in talkWithDriver a BC_TRANSACTION command is sent to the Binder driver. The driver delivers the transaction to servicemanager, whose binder_parse takes its BR_TRANSACTION branch and ends up in svcmgr_handler, where the transaction code selects the SVC_MGR_CHECK_SERVICE case:
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
     .....
 case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = do_find_service(bs, s, len);
        if (!ptr)
            break;
            bio_put_ref(reply, ptr); // store the found service's handle in reply
        return 0;
}
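do_find_service is essentially a lookup in the name-to-handle table that service registration filled earlier. A minimal sketch of that registry, under stated assumptions (FakeRegistry is a hypothetical class; the real service_manager.c walks a linked list of svcinfo entries rather than using a map, and handles are driver-assigned references, not plain integers):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Hypothetical sketch of the ServiceManager registry: registration fills a
// name -> binder handle table, lookup reads it back.
class FakeRegistry {
public:
    bool add(const std::string& name, uint32_t handle) {
        if (name.empty()) return false;  // reject bad names, as the real code does
        table[name] = handle;
        return true;
    }
    uint32_t find(const std::string& name) const {
        auto it = table.find(name);
        return it == table.end() ? 0 : it->second;  // 0 == not found
    }
private:
    std::map<std::string, uint32_t> table;
};
```

In the real flow the handle returned by the lookup is what bio_put_ref packs into the reply, and the client's Parcel::readStrongBinder turns it back into a BpBinder.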


When remote()->transact returns in checkService, reply.readStrongBinder() extracts the service's IBinder. Back in get_audio_flinger, interface_cast<IAudioFlinger>(binder) is called, which is just IAudioFlinger::asInterface(binder):
android::sp<IAudioFlinger> IAudioFlinger::asInterface(
            const android::sp<android::IBinder>& obj)
    {
        android::sp<IAudioFlinger> intr;
        if (obj != NULL) {
            intr = static_cast<IAudioFlinger*>(
                obj->queryLocalInterface( // BpBinder::queryLocalInterface returns NULL
                        IAudioFlinger::descriptor).get());
            if (intr == NULL) { // so this branch runs: wrap the BpBinder in a proxy
                intr = new BpAudioFlinger(obj);
            }
        }
        return intr;
    }
So the final result of getService is BpAudioFlinger, the client-side proxy object for AudioFlinger.
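The asInterface pattern boils down to one question: is the binder local to this process or remote? The sketch below is hypothetical (the class names are illustrative, and std::shared_ptr stands in for Android's sp<>): a local BBinder-side object answers queryLocalInterface with itself, a remote BpBinder answers null, so the caller wraps it in a new proxy.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical binder base: isLocal() models "queryLocalInterface != NULL".
struct IBinder {
    virtual ~IBinder() = default;
    virtual bool isLocal() const = 0;
};
struct RemoteBinder : IBinder { bool isLocal() const override { return false; } };
struct LocalBinder  : IBinder { bool isLocal() const override { return true;  } };

// asInterface analog: reuse the local object when possible, else build a proxy.
// Returns a label instead of an interface pointer, to keep the sketch small.
std::string asInterface(const std::shared_ptr<IBinder>& obj) {
    if (!obj) return "null";
    if (obj->isLocal()) return "local";  // same process: direct call, no IPC
    return "proxy";                      // remote: new Bp-side proxy wrapping obj
}
```

This is why a service calling getService on its own service would get the implementation directly, while every other process gets a proxy that marshals calls through the driver.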


5.4 Communication between the proxy and the service
Once the client holds the AudioFlinger proxy, it can call the proxy's interface to talk to the service. The flow is shown below:
Binder在Native框架层的实现_第4张图片
Take setMasterVolume as the example; the function is in frameworks/base/media/libmedia/IAudioFlinger.cpp:
 virtual status_t setMasterVolume(float value)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor());
        data.writeFloat(value);
        remote()->transact(SET_MASTER_VOLUME, data, &reply);
        return reply.readInt32();
    }
remote() returns the BpBinder, which again goes through IPCThreadState::transact. mediaserver runs two binder threads to handle binder traffic; each sits in talkWithDriver waiting for client requests, and when data arrives it runs executeCommand:
sp<BBinder> b((BBinder*)tr.cookie);
// BBinder's transact method
const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
which calls BBinder::transact:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }
    if (reply != NULL) {
        reply->setDataPosition(0);
    }
    return err;
}
If no case matches the code, the default branch calls onTransact, which subclasses override:
status_t AudioFlinger::onTransact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    return BnAudioFlinger::onTransact(code, data, reply, flags);
}
This calls BnAudioFlinger::onTransact, in IAudioFlinger.cpp:
status_t BnAudioFlinger::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
     switch(code) {
     case SET_MASTER_VOLUME: {
            CHECK_INTERFACE(IAudioFlinger, data, reply);
            reply->writeInt32( setMasterVolume(data.readFloat()) );
            return NO_ERROR;
        } break;
     case XXXX
     break;
      ....
     }
}
onTransact defines the server-side handler for every client request: the proxy only needs to send the transaction code through the binder driver, and the stub's onTransact dispatches on that code to the real implementation.
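That proxy/stub contract can be sketched end to end. The code below is a hypothetical, self-contained stand-in (FakeService and the enum values are illustrative; the real code unpacks arguments from a Parcel and the status constant is NO_ERROR): the proxy and the stub agree on a transaction code, the stub switches on it, calls the implementation, and packs the status back.

```cpp
#include <cassert>
#include <cstdint>

// Transaction codes shared by proxy and stub (illustrative values).
enum : uint32_t { SET_MASTER_VOLUME = 1, UNKNOWN_CODE = 999 };

// Stand-in for the AudioFlinger implementation behind the stub.
struct FakeService {
    float masterVolume = 1.0f;
    int setMasterVolume(float v) { masterVolume = v; return 0; } // 0 == NO_ERROR
};

// onTransact analog: decode the code, unpack the argument, call the service,
// write the status into *reply. Returns -1 for codes it does not understand
// (the real code would fall through to BBinder::onTransact instead).
int onTransact(FakeService& svc, uint32_t code, float arg, int* reply) {
    switch (code) {
    case SET_MASTER_VOLUME:
        *reply = svc.setMasterVolume(arg);
        return 0;
    default:
        return -1;
    }
}
```

The key design point survives the simplification: the driver never interprets the code, it only carries it, so adding a new method means adding one case on the stub side and one transact call on the proxy side.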