1. If fewer than corePoolSize threads are running, create a new thread to run the task (note: this step requires acquiring the global lock).
2. If corePoolSize or more threads are running, add the task to the BlockingQueue.
3. If the task cannot be added to the BlockingQueue (the queue is full), create a new thread to handle it (note: this step also requires acquiring the global lock).
4. If creating that thread would push the number of running threads past maximumPoolSize, the task is rejected and RejectedExecutionHandler.rejectedExecution() is called.
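To make these four steps concrete, here is a minimal, self-contained sketch (not from the original text): it assumes a tiny pool with corePoolSize 2, maximumPoolSize 4 and a bounded queue of capacity 2, so the first two tasks start core threads, the next two are queued, two more trigger extra threads, and the seventh is rejected. The class name ExecuteFlowDemo and all sizes are made up for illustration.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class ExecuteFlowDemo {
    public static void main(String[] args) throws InterruptedException {
        // core = 2, max = 4, bounded queue of 2 => at most 4 + 2 tasks accepted at once
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.AbortPolicy());
        Runnable slowTask = () -> {
            try {
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        for (int i = 1; i <= 7; i++) {
            try {
                pool.execute(slowTask);
                // tasks 1-2 start core threads, 3-4 are queued, 5-6 start extra threads
                System.out.printf("task %d accepted, poolSize=%d, queueSize=%d%n",
                        i, pool.getPoolSize(), pool.getQueue().size());
            } catch (RejectedExecutionException e) {
                // task 7 exceeds maximumPoolSize with a full queue and is rejected
                System.out.printf("task %d rejected%n", i);
            }
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}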
Work queue options:
ArrayBlockingQueue: a bounded blocking queue.
LinkedBlockingQueue: an unbounded (by default) task queue.
PriorityBlockingQueue: an unbounded blocking queue that orders elements by priority.
SynchronousQueue: a blocking queue that stores no elements.
Saturation (rejection) policies:
ThreadPoolExecutor.AbortPolicy: the abort policy, which is the pool's default. It throws a RejectedExecutionException (a runtime exception) when the executor cannot accept a task.
ThreadPoolExecutor.DiscardPolicy: the discard policy. It silently drops a task whose submission fails and does nothing else.
ThreadPoolExecutor.DiscardOldestPolicy: the discard-oldest policy. It removes the element at the head of the work queue and then submits the new task.
ThreadPoolExecutor.CallerRunsPolicy: the caller-runs policy. When the pool is saturated, the task is neither handed to the pool nor discarded; the submitting thread runs it itself, turning the asynchronous task into a synchronous one (see the sketch below).
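A small illustration of the caller-runs policy (a sketch, not from the original text; the class name CallerRunsPolicyDemo and the pool sizes are assumptions): the third task overflows a 1-thread / 1-slot pool and is therefore executed on the submitting main thread.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class CallerRunsPolicyDemo {
    public static void main(String[] args) throws InterruptedException {
        // tiny pool: 1 core / 1 max thread, queue of 1, caller-runs on saturation
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());
        Runnable task = () -> {
            System.out.println("running in " + Thread.currentThread().getName());
            try {
                TimeUnit.MILLISECONDS.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        // task 1 runs on the pool thread, task 2 waits in the queue,
        // task 3 overflows and is run by the submitting (main) thread,
        // which degrades the "async" call to a synchronous one and throttles the producer
        for (int i = 0; i < 3; i++) {
            pool.execute(task);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}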
package util.thread.threadpool;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
/**
* @author LiDong
* @version 1.0.0
* @createTime 03/20/2023 08:24 AM
*/
@SuppressWarnings("all")
public class CustomThreadPool {
private static volatile ThreadPoolExecutor threadPool;
/**
* Core pool size
*/
public static final int CORE_POOL_SIZE = Runtime.getRuntime().availableProcessors() + 1;
/**
* Maximum pool size
*/
public static final int MAX_POOL_SIZE = Runtime.getRuntime().availableProcessors() << 1;
/**
* Keep-alive time for idle threads: 1000 ms (1 s)
*/
public static final int KEEP_ALIVE_TIME = 1000;
/**
* Blocking queue capacity
*/
public static final int BLOCK_QUEUE_SIZE = 1000;
private CustomThreadPool() {
}
/**
* execute runnable
*
* @param runnable runnable
*/
public static void executor(Runnable runnable) {
getThreadPoolExecutor().execute(runnable);
}
/**
* execute callable
*
* @param callable callable
*/
public static <T> Future<T> submit(Callable<T> callable) {
return getThreadPoolExecutor().submit(callable);
}
/**
* Get the thread pool instance (lazily initialized with double-checked locking)
*
* @return ThreadPoolExecutor
*/
public static ThreadPoolExecutor getThreadPoolExecutor() {
if (threadPool == null) {
synchronized (CustomThreadPool.class) {
if (threadPool == null) {
threadPool = new ThreadPoolExecutor(CORE_POOL_SIZE,
MAX_POOL_SIZE,
KEEP_ALIVE_TIME,
TimeUnit.MILLISECONDS,
new ArrayBlockingQueue<>(BLOCK_QUEUE_SIZE),
new CustomThreadPoolFactory("custom-pool"),
new ThreadPoolExecutor.AbortPolicy());
}
}
}
return threadPool;
}
/**
* Custom thread factory
*/
public static class CustomThreadPoolFactory implements ThreadFactory {
private final AtomicInteger poolNumber = new AtomicInteger(1);
private final AtomicInteger threadNumber = new AtomicInteger(1);
private final String namePrefix;
private final ThreadGroup group;
public CustomThreadPoolFactory(String name) {
this.namePrefix = name + "-" + poolNumber.getAndIncrement() + "-thread-";
SecurityManager s = System.getSecurityManager();
group = (s != null) ? s.getThreadGroup() : Thread.currentThread().getThreadGroup();
}
@Override
public Thread newThread(Runnable r) {
Thread t = new Thread(group, r, namePrefix + threadNumber.getAndIncrement(), 0);
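// normalize thread attributes: pool threads should be non-daemon and run at NORM_PRIORITY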
if (t.isDaemon()) {
t.setDaemon(false);
}
if (t.getPriority() != Thread.NORM_PRIORITY) {
t.setPriority(Thread.NORM_PRIORITY);
}
return t;
}
}
}
Since JDK 1.5, the atomic classes have provided a simple, efficient, and thread-safe way to update a single variable. They live in the atomic sub-package of JUC (java.util.concurrent.atomic). For atomically updating basic types there are AtomicBoolean, AtomicInteger, and AtomicLong.
@Test
public void test1() throws InterruptedException {
AtomicInteger num = new AtomicInteger(0);
for (int i = 0; i < 5; i++) {
ThreadPoolUtils.executor(() -> {
for (int j = 0; j < 10; j++) {
dealMethodOne(num);
}
});
}
Thread.sleep(3000);
logger.info(String.valueOf(num.get()));
}
private void dealMethodOne(AtomicInteger num) {
int i = num.incrementAndGet();
logger.info("Current thread {} i value {}", Thread.currentThread().getName(), i);
}
For atomically updating array elements there are AtomicIntegerArray, AtomicLongArray, and AtomicReferenceArray.
/**
* Atomically update array elements
*/
@Test
public void test1() {
int[] arr = {3, 2};
AtomicIntegerArray atomicIntegerArray = new AtomicIntegerArray(arr);
logger.info(String.valueOf(atomicIntegerArray.addAndGet(1, 8)));
int i = atomicIntegerArray.accumulateAndGet(0, 2, (left, right) ->
left * right / 3
);
logger.info(String.valueOf(i));
}
For atomically updating reference types, the atomic package provides:
AtomicReference: atomically updates a reference.
AtomicReferenceFieldUpdater: atomically updates a reference field inside a class.
AtomicMarkableReference: atomically updates a reference together with a boolean mark.
For atomically updating fields, the atomic package likewise provides:
AtomicIntegerFieldUpdater: atomically updates an int field.
AtomicLongFieldUpdater: atomically updates a long field.
AtomicStampedReference: atomically updates a reference together with a version number (stamp); the stamp exists to solve the ABA problem of CAS.
A field updater is created with the static newUpdater() method, which takes the class and the name of the field to update; the target field must be declared public volatile.
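A minimal sketch of a field updater and a stamped reference (not from the original text; the User class and its age field are hypothetical): newUpdater() receives the class and the field name, the field is public volatile, and AtomicStampedReference rejects a CAS whose stamp is stale even though the value has returned to "A".
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import java.util.concurrent.atomic.AtomicStampedReference;
public class FieldUpdaterDemo {
    // the target field must be public volatile (and neither static nor final)
    static class User {
        public volatile int age;
    }
    public static void main(String[] args) {
        AtomicIntegerFieldUpdater<User> updater =
                AtomicIntegerFieldUpdater.newUpdater(User.class, "age");
        User user = new User();
        user.age = 10;
        System.out.println(updater.getAndIncrement(user)); // 10
        System.out.println(user.age);                      // 11
        // AtomicStampedReference pairs the reference with a stamp (version),
        // so an A -> B -> A sequence is detected by the changed stamp
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);
        int stamp = ref.getStamp();
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);
        // the old stamp no longer matches, so this CAS fails even though the value is "A" again
        boolean swapped = ref.compareAndSet("A", "C", stamp, stamp + 1);
        System.out.println(swapped); // false
    }
}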
CountDownLatch lets the main thread wait until the other threads have finished their tasks (aggregating their results if necessary) before it continues. Its two key methods are countDown() and await(): countDown() decrements the counter by one and is normally called by the worker threads, while await() blocks the calling thread and is normally called by the main thread. Note that countDown() is not limited to one call per thread: if the same thread calls it several times, the counter is decremented each time. Likewise, await() is not limited to a single thread: if several threads call await(), they all wait, sharing the same lock in shared mode.
import java.util.concurrent.CountDownLatch;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class CountDownLatchTest {
private static final int COUNT_DOWN_LATCH_NUM = 12;
private static final Logger logger = LoggerFactory.getLogger(CountDownLatchTest.class);
/**
* @param args args
*/
public static void main(String[] args) {
CountDownLatch countDownLatch = new CountDownLatch(COUNT_DOWN_LATCH_NUM);
try {
for (int i = 0; i < COUNT_DOWN_LATCH_NUM; i++) {
CountDownLatchTask countDownLatchTask = new CountDownLatchTask(countDownLatch);
new Thread(countDownLatchTask).start();
}
countDownLatch.await();
logger.info("主线程开始...");
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
Thread.currentThread().interrupt();
}
}
/**
* CountDownLatch task
*/
private static class CountDownLatchTask implements Runnable {
private static final Logger log = LoggerFactory.getLogger(CountDownLatchTask.class);
private final CountDownLatch countDownLatch;
CountDownLatchTask(CountDownLatch aCountDownLatch) {
countDownLatch = aCountDownLatch;
}
@Override
public void run() {
try {
// TODO business logic goes here
} catch (Exception e) {
log.error(e.getMessage(), e);
} finally {
countDownLatch.countDown();
log.info("线程计数器的个数为:{}", countDownLatch.getCount());
}
}
}
}
CyclicBarrier blocks a group of threads as they reach a barrier (synchronization point); only when the last thread arrives does the barrier open, letting all the blocked threads continue.
Compared with CountDownLatch:
The counter of a CountDownLatch can only be used once, whereas a CyclicBarrier's counter can be reset with reset() and reused, so CyclicBarrier can handle more complex scenarios.
CyclicBarrier also provides other useful methods, e.g. getNumberWaiting() returns how many threads are blocked at the barrier, and isBroken() tells whether any waiting thread has been interrupted.
CountDownLatch lets one or more threads wait for a set of events to occur, whereas CyclicBarrier is used to wait for the other threads to reach the barrier position.
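The following sketch (not part of the original example; the class name CyclicBarrierReuseDemo and the thread names are made up) shows the reuse property: the same barrier instance trips twice, once per group of three threads, without being recreated.
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
public class CyclicBarrierReuseDemo {
    public static void main(String[] args) throws InterruptedException {
        // barrier for 3 parties; the barrier action runs in the last thread to arrive, once per generation
        CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("--- all 3 parties arrived, barrier trips ---"));
        Runnable worker = () -> {
            try {
                System.out.println(Thread.currentThread().getName() + " waiting at barrier");
                barrier.await();
                System.out.println(Thread.currentThread().getName() + " passed the barrier");
            } catch (InterruptedException | BrokenBarrierException e) {
                Thread.currentThread().interrupt();
            }
        };
        // first round: the barrier trips when the third party arrives
        for (int i = 0; i < 3; i++) {
            new Thread(worker, "round1-worker-" + i).start();
        }
        Thread.sleep(500);
        // second round reuses the same barrier instance: unlike CountDownLatch,
        // its count resets automatically after each trip
        for (int i = 0; i < 3; i++) {
            new Thread(worker, "round2-worker-" + i).start();
        }
        Thread.sleep(500);
    }
}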
import java.util.concurrent.CyclicBarrier;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class CyclicBarrierTest {
private static final Logger logger = LoggerFactory.getLogger(CyclicBarrierTest.class);
/**
* The basic constructor is CyclicBarrier(int parties); the argument is the number of threads the barrier intercepts.
* Each thread calls await() to tell the CyclicBarrier that it has reached the barrier, and the calling thread then blocks.
*
* @param args args
*/
public static void main(String[] args) {
CyclicBarrier cyclicBarrier = new CyclicBarrier(2);
CyclicBarrierTask cyclicBarrierTask = new CyclicBarrierTask(cyclicBarrier);
new Thread(cyclicBarrierTask).start();
try {
cyclicBarrier.await();
logger.info("ok");
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
private static class CyclicBarrierTask implements Runnable {
private final CyclicBarrier cyclicBarrier;
private CyclicBarrierTask(CyclicBarrier aCyclicBarrier) {
cyclicBarrier = aCyclicBarrier;
}
@Override
public void run() {
logger.info("CyclicBarrierTask start...");
try {
cyclicBarrier.await();
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
}
}
Semaphore is a counter that guards access to one or more shared resources. A thread must acquire the semaphore before accessing a resource: if the internal counter is greater than 0, the semaphore decrements it by 1 and grants access to the shared resource; if the counter is 0, the semaphore puts the thread to sleep until the counter becomes greater than 0 again. When the thread is done with the resource, it must release the permit.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class SemaphoreTest {
private static final Logger logger = LoggerFactory.getLogger(SemaphoreTest.class);
private static ExecutorService executorService = Executors.newCachedThreadPool();
/**
* Controls access to a resource: only the permitted number of tasks may use it at once; the rest must wait until a permit is released.
*
* @param args args
*/
public static void main(String[] args) {
Semaphore semaphore = new Semaphore(2);
for (int i = 0; i < 10; i++) {
int index = i;
executorService.execute(() -> {
try {
semaphore.acquire();
logger.info("线程:{}获得许可:{}", Thread.currentThread().getName(), index);
TimeUnit.SECONDS.sleep(1);
semaphore.release();
logger.info("允许TASK个数:{}", semaphore.availablePermits());
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
}
});
}
executorService.shutdown();
}
}
Exchanger is a synchronization point at which two threads can swap data: each thread calls exchange() and receives the object the other thread passed in.
import java.util.concurrent.Exchanger;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class ExchangerTest {
private static final Logger logger = LoggerFactory.getLogger(ExchangerTest.class);
private static final Exchanger<String> exchanger = new Exchanger<>();
private static final ExecutorService threadPool = Executors.newFixedThreadPool(2);
/**
* @param args args
*/
public static void main(String[] args) {
threadPool.execute(() -> {
try {
String A = "1234";
exchanger.exchange(A);
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
}
});
threadPool.execute(() -> {
try {
String B = "5678";
String A = exchanger.exchange("X");
logger.info("A和B数据是否一致:{}", A.equals(B));
logger.info("A= {}", A);
logger.info("B= {}", B);
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
}
});
threadPool.shutdown();
logger.info("主线程..");
}
}
The Executor framework consists of three parts: the tasks, expressed by the Runnable and Callable interfaces; task execution, expressed by the Executor interface and the ExecutorService interface that extends it, with two key ExecutorService implementations in the framework, ThreadPoolExecutor and ScheduledThreadPoolExecutor; and asynchronous results, expressed by the Future interface and the FutureTask class that implements it.
The core class of the Executor framework is ThreadPoolExecutor, the thread-pool implementation. It is built mainly from the following four components:
corePool: the size of the core thread pool.
maximumPool: the maximum size of the thread pool.
BlockingQueue: the work queue that temporarily holds tasks.
RejectedExecutionHandler: the handler that execute() will invoke when the ThreadPoolExecutor has been shut down or is saturated (the maximum pool size has been reached and the work queue is full).
public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>(),
threadFactory);
}
FixedThreadPool sets both corePoolSize and maximumPoolSize to the nThreads argument given when the FixedThreadPool is created. SingleThreadExecutor sets corePoolSize and maximumPoolSize to 1; its other parameters are the same as FixedThreadPool's.
public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
return new FinalizableDelegatedExecutorService
(new ThreadPoolExecutor(1, 1,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>(),
threadFactory));
}
CachedThreadPool sets corePoolSize to 0, so the corePool is empty, and maximumPoolSize to Integer.MAX_VALUE, so the maximumPool is effectively unbounded. keepAliveTime is set to 60L, which means an idle thread in a CachedThreadPool waits at most 60 seconds for a new task; a thread idle for more than 60 seconds is terminated.
public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>(),
threadFactory);
}
Note that a CachedThreadPool can exhaust CPU and memory resources by creating too many threads.
ScheduledThreadPoolExecutor extends ThreadPoolExecutor. It is mainly used to run tasks after a given delay or to execute tasks periodically. Its functionality is similar to Timer's, but ScheduledThreadPoolExecutor is more powerful and flexible: a Timer corresponds to a single background thread, whereas ScheduledThreadPoolExecutor lets you specify the number of background threads in its constructor.
public ScheduledThreadPoolExecutor(int corePoolSize) {
super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
new DelayedWorkQueue());
}
schedule() creates and returns a ScheduledFuture that becomes enabled after the given delay: the task is handed to the pool immediately, and the pool arranges for a thread to start running it once the specified delay has elapsed; from then on it proceeds at its normal pace.
@Test
public void test2() throws InterruptedException {
logger.info("准备执行任务");
List<String> taskList = Lists.newArrayList("1号", "2号", "5号", "7号");
Queue<String> queue = new ConcurrentLinkedDeque<>(taskList);
ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
int size = queue.size();
for (int i = 0; i < size; i++) {
ScheduledFuture<String> future = pool.schedule(() -> {
logger.info("{} {}当前执行的任务是{}", Thread.currentThread().getName(), System.currentTimeMillis(), queue.poll());
TimeUnit.SECONDS.sleep(2);
return "callSchedule";
}, 5, TimeUnit.SECONDS);
}
Thread.sleep(50000);
}
public static void useScheduledThreadPool() {
ScheduledExecutorService executor = Executors.newScheduledThreadPool(5);
executor.scheduleAtFixedRate(() -> {
long start = System.currentTimeMillis();
logger.info("scheduleAtFixedRate 开始执行时间:{}", DateFormat.getTimeInstance().format(new Date()));
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
}
long end = System.currentTimeMillis();
logger.info("scheduleAtFixedRate 执行花费时间:{} s", (end - start) / 1000);
logger.info("scheduleAtFixedRate 执行完成时间:{}", DateFormat.getTimeInstance().format(new Date()));
logger.info("======================================");
}, 1, 5, TimeUnit.SECONDS);
}
@Test
public void test() throws InterruptedException {
useScheduledThreadPool();
Thread.sleep(20000);
}
@Test
public void test3() throws InterruptedException {
logger.info("准备执行任务");
List<String> taskList = Lists.newArrayList("1号", "2号", "5号", "7号", "9号", "10号");
Queue<String> queue = new ConcurrentLinkedDeque<>(taskList);
ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
pool.scheduleWithFixedDelay(() -> {
logger.info("{} 当前执行的任务是{}", Thread.currentThread().getName(), queue.poll());
try {
TimeUnit.SECONDS.sleep(6);
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
}
}, 3, 5, TimeUnit.SECONDS);
Thread.sleep(50000);
}
The Future interface and the FutureTask class that implements it represent the result of an asynchronous computation.
Besides Future, FutureTask also implements the Runnable interface, so a FutureTask can be handed to an Executor for execution or run directly by the calling thread via FutureTask.run(). Depending on when FutureTask.run() is executed, a FutureTask is in one of three states:
1. Not started: FutureTask.run() has not been executed yet, i.e. the FutureTask has been created but run() has not been called.
2. Started: FutureTask.run() is in the middle of executing.
3. Completed: FutureTask.run() has finished normally.
public class FutureTaskTest {
private static final Logger logger = LoggerFactory.getLogger(FutureTaskTest.class);
private final Map<Object, Future<String>> taskCache = new ConcurrentHashMap<>();
private String executionTask(String taskName) throws InterruptedException {
while (true) {
Future<String> future = taskCache.get(taskName);
if (future == null) {
Callable<String> task = new Callable<String>() {
public String call() {
return taskName;
}
};
FutureTask<String> futureTask = new FutureTask<>(task);
future = taskCache.putIfAbsent(taskName, futureTask);
if (future == null) {
future = futureTask;
// 执行任务
futureTask.run();
}
}
try {
return future.get();
} catch (Exception e) {
taskCache.remove(taskName, future);
}
}
}
@Test
public void test1() throws ExecutionException, InterruptedException {
for (int i = 0; i < 100; i++) {
int finalI = i;
new Thread(() -> {
try {
executionTask("task" + finalI);
logger.info("Thread " + finalI + "... running");
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}).start();
}
Thread.sleep(7000);
List<String> list = new ArrayList<>();
for (Map.Entry<Object, Future<String>> entry : taskCache.entrySet()) {
Future<String> future = entry.getValue();
list.add(future.get());
}
logger.info("Taskcache size:{}", taskCache.size());
logger.info("list :{}", list);
}
}