ReadWriteLock in Practice

Counting user clicks on an article is a feature most developers have run into at some point. The question is: how do you avoid hitting the database with a "+1" update for every single click?

Below is a fairly simple approach: buffer the counts in memory and protect them with ReadWriteLock, the read/write lock utility from java.util.concurrent.

Everything is in the code, shown below.

The essential code is:

private final ReadWriteLock lock = new ReentrantReadWriteLock(); // read-write lock

// read lock
lock.readLock().lock();
try {
    ... // omitted
} finally {
    lock.readLock().unlock();
}

// write lock
lock.writeLock().lock();
try {
    ... // omitted
} finally {
    lock.writeLock().unlock();
}

(Note that the lock is acquired before the try block, so the finally clause only calls unlock() once the lock has actually been obtained.)

The trick in the example below is how the two locks are divided up: recording a click takes the read lock, so any number of threads can count clicks concurrently, while the scheduled flush task briefly takes the write lock to swap in a fresh map, which guarantees that no click is lost during the swap.

The complete example:
import org.apache.commons.lang3.RandomStringUtils;

import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Created by cailu on 2018/6/11.
 */
public class ReadWriteLockTest{

    public static void main(String[] args) throws InterruptedException {

        CountDownLatch start = new CountDownLatch(1);
        CountDownLatch stop = new CountDownLatch(2); // one count per worker thread

        for (int i = 0; i < 2; i++) {
            Thread thread = new Thread(new RunMethod(start, stop));
            thread.start();
        }

        start.countDown(); // release the workers
        stop.await();      // wait for both workers to finish
        System.out.println("All threads finished!!!");

        Thread.sleep(5000); // give the scheduled task time to flush the last batch

        ConcurrentHashMap<String, Integer> totalMap = Counter.getInst().getTotalCountMap();
        int totalCount = 0;
        System.out.println("===== flush totals =====");
        for (Map.Entry<String, Integer> entry : totalMap.entrySet()) {
            System.out.println("flush totals|" + entry.getKey() + "|" + entry.getValue());
            totalCount += entry.getValue();
        }
        System.out.println("flush totals|total|" + totalCount);
    }

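    // Worker thread: waits for the start signal, then records 10,000 clicks
    // against randomly chosen article ids "A" and "B".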
    static class RunMethod implements Runnable{

        private CountDownLatch start;
        private CountDownLatch stop;

        public RunMethod(CountDownLatch start, CountDownLatch stop) {
            this.start = start;
            this.stop = stop;
        }

        public void run() {
            try {
                start.await();
                for (int i = 0; i < 10000; i++) {
                    Counter.getInst().addClick(RandomStringUtils.random(1, "AB"));
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                stop.countDown();
            }
        }
    }

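    // Singleton counter: clicks are buffered in an in-memory map, and a scheduled
    // task flushes the buffer every 20 ms, so the "database" only sees batched writes.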
    static class Counter{

        private ConcurrentHashMap<String, AtomicInteger> map = new ConcurrentHashMap<>();
        private final ReadWriteLock lock = new ReentrantReadWriteLock(); // read-write lock guarding the map swap

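        // Initialization-on-demand holder idiom: the Counter singleton is
        // created lazily and thread-safely the first time getInst() is called.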
        static class CounterHolder{
            public static final Counter instance = new Counter();
        }

        public static Counter getInst(){
            return CounterHolder.instance;
        }

        private Counter(){
            // Start a scheduled task that flushes the in-memory counters
            // to the database every 20 milliseconds.
            ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();
            executorService.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    // Swap in a fresh map under the write lock so that no
                    // addClick() call can run while the reference changes.
                    ConcurrentHashMap<String, AtomicInteger> tmpMap;
                    lock.writeLock().lock();
                    try {
                        tmpMap = map;
                        map = new ConcurrentHashMap<>();
                    } finally {
                        lock.writeLock().unlock();
                    }

                    if (!tmpMap.isEmpty()) {
                        System.out.println("================== flush start ==================");
                        for (Map.Entry<String, AtomicInteger> entry : tmpMap.entrySet()) {
                            String key = entry.getKey();
                            int count = entry.getValue().get();
                            System.out.println("flushed|" + key + "|" + count);

                            Integer tempCount = totalCountMap.get(key);
                            if (tempCount == null) {
                                totalCountMap.put(key, count);
                            } else {
                                totalCountMap.put(key, count + tempCount);
                            }
                        }
                        System.out.println("================== flush end ==================");
                    }
                }
            }, 0, 20, TimeUnit.MILLISECONDS);
        }

        // Called once per user click: increments the in-memory count for the given id.
        // The read lock is shared, so any number of click threads can run this
        // concurrently; it only blocks while the flush task holds the write lock.
        public void addClick(String id) {
            lock.readLock().lock();
            try {
                AtomicInteger atomicIntegerNew = new AtomicInteger(0);
                AtomicInteger atomicInteger = map.putIfAbsent(id, atomicIntegerNew);
                if (atomicInteger == null) { // this thread inserted the new counter
                    atomicInteger = atomicIntegerNew;
                }
                atomicInteger.addAndGet(1);
            } finally {
                lock.readLock().unlock();
            }
        }


        // Keep a running total as well, so the result can be verified.
        private final ConcurrentHashMap<String, Integer> totalCountMap = new ConcurrentHashMap<>();

        public ConcurrentHashMap<String, Integer> getTotalCountMap() {
            return totalCountMap;
        }
    }

}

The test output:
================== flush start ==================
flushed|A|3583
flushed|B|3659
================== flush end ==================
All threads finished!!!
================== flush start ==================
flushed|A|6374
flushed|B|6384
================== flush end ==================
===== flush totals =====
flush totals|A|9957
flush totals|B|10043
flush totals|total|20000

Test result explanation
We started 2 threads, each recording 10,000 clicks, and the flushed totals add up to exactly 20,000, so the statistics are correct under concurrency.
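In this demo the scheduled flush task only prints the counts. In a real application that is where the database write would go; below is a minimal sketch of how a batched flush might look, assuming a hypothetical article_click(article_id, click_count) table and an injected javax.sql.DataSource (neither is part of the example above).

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import javax.sql.DataSource;

// A minimal sketch of a batched flush, assuming a hypothetical
// article_click(article_id, click_count) table and an injected DataSource.
public class ClickFlusher {

    private final DataSource dataSource;

    public ClickFlusher(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Would be called from the scheduled task with the map that was just swapped out.
    public void flush(Map<String, AtomicInteger> tmpMap) throws SQLException {
        String sql = "UPDATE article_click SET click_count = click_count + ? WHERE article_id = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            for (Map.Entry<String, AtomicInteger> entry : tmpMap.entrySet()) {
                ps.setInt(1, entry.getValue().get()); // clicks accumulated since the last flush
                ps.setString(2, entry.getKey());      // article id
                ps.addBatch();
            }
            ps.executeBatch(); // one round trip per flush instead of one UPDATE per click
        }
    }
}

Whatever the exact schema, this is the point of the in-memory buffering: each 20 ms window collapses many clicks into a handful of batched UPDATE statements instead of one write per click.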
Now go start your own ReadWriteLock journey.
