In one of our business scenarios, a database log table has to be exported to Excel for statistical analysis and classification. Because the table is very large, the final Excel export turned out to be extremely slow. After trying several approaches it became clear that, no matter which API was used, the single thread doing the writing was always the bottleneck, so the export was eventually reworked to use multiple threads. The idea is:
1. Split the total data set into segments (a short segmentation sketch follows this list)
2. Have each thread handle a different segment
3. Submit the per-segment tasks to a thread pool
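As a quick illustration of step 1, the snippet below is a minimal sketch of carving the row list into fixed-size segments; the class name, method name, and segment size are illustrative assumptions, not code from the project.
import java.util.List;

// Hypothetical helper illustrating the segmentation idea: each [start, end)
// range of rows becomes the work unit for one task in steps 2 and 3.
public class SegmentSketch {

    static void splitIntoSegments(List<List<String>> rows, int segmentSize) {
        int total = rows.size();
        for (int start = 0; start < total; start += segmentSize) {
            int end = Math.min(start + segmentSize, total);
            List<List<String>> segment = rows.subList(start, end);
            // `segment` (plus its start offset) is what one worker thread will write
        }
    }
}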
Now let's look at the concrete code. For easier testing, the whole flow is driven directly through a test endpoint.
1. Controller
/**
 * Export system operation logs - V2 test
 */
@GetMapping("/log-export/v2")
@ApiOperation(value = "导出系统日志信息V2", notes = "导出系统日志信息V2", produces = "application/json")
public void exportSysLogV2(@RequestParam(name = "userName", required = false) String userName,
                           @RequestParam(name = "startDate", required = false) String startDate,
                           @RequestParam(name = "endDate", required = false) String endDate,
                           @RequestParam(name = "type", required = false) String type,
                           HttpServletResponse response) {
    operLogService.exportSysLogV2(userName, startDate, endDate, type, response);
}
2. Service implementation class
@Override
public void exportSysLogV2(String userName, String startDate, String endDate, String type, HttpServletResponse response) {
    // query the raw operation-log records matching the filter conditions
    List exportLists = workflowTaskMapper.getOperlogList(userName, startDate, endDate);
    // flatten the DB entities into row data for the sheet
    List handleLists = handleDbLists(exportLists);
    // ... build the workbook and hand the rows to MultiWrite (see the sketch below)
}
The input parameters can be adjusted to whatever the actual business requires; the glue that ties the pieces together is sketched below.
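Since the tail of the method above is not shown, here is a minimal sketch of how the pieces could fit together. The segment size (1000), thread count (5), the getWorkbook() getter, and the full exec(...) parameter list are assumptions for illustration, not the original code.
import java.util.List;

// Hypothetical glue code: build the workbook, write the header row, then hand
// the flattened rows over to MultiWrite for the threaded write.
public class ExportGlueSketch {

    public static void export(List<List<String>> handleLists, String type) {
        XSSFWorkbookWrapper wrapper = new XSSFWorkbookWrapper(); // creates the workbook and "main" sheet (see 3.1)
        wrapper.initTitile(type);                                // writes the header row for this export type
        // assumed full parameter list for exec(...): 1000 rows per segment, 5 worker threads,
        // the rows, and the workbook (getWorkbook() is an assumed getter, see 3.1)
        MultiWrite.exec(1000, 5, handleLists, wrapper.getWorkbook());
    }
}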
3. The MultiWrite method
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
public class MultiWrite {

    // max = rows handled per segment, threadMax = number of worker threads;
    // a possible implementation (with an assumed workbook parameter) is sketched below
    public static void exec(int max, int threadMax, List<List<String>> data /* ... */) {
        // ...
    }
}
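Here is a minimal sketch of what exec(...) could look like under those assumptions: the rows are split into segments of at most max rows, each segment is submitted to a ThreadPoolExecutor, and once every segment has been written the workbook is flushed to disk. The workbook parameter, the CountDownLatch, and the pool settings are assumptions, not the original implementation.
import java.io.IOException;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.apache.poi.xssf.usermodel.XSSFWorkbook;

// Hypothetical version of MultiWrite.exec(), for orientation only.
public class MultiWriteSketch {

    public static void exec(int max, int threadMax, List<List<String>> data, XSSFWorkbook workbook) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                threadMax, threadMax, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        int total = data.size();
        int segments = (total + max - 1) / max;          // number of segments of at most `max` rows
        CountDownLatch latch = new CountDownLatch(segments);
        WritePOIUtils poiUtils = new WritePOIUtils();    // see 3.4

        for (int start = 0; start < total; start += max) {
            int end = Math.min(start + max, total);
            List<List<String>> segment = data.subList(start, end);
            int startRow = start + 1;                    // row 0 already holds the title row
            pool.execute(() -> {
                try {
                    // each task writes its own, non-overlapping row range
                    poiUtils.setWorkbookData(workbook, segment, startRow);
                } finally {
                    latch.countDown();
                }
            });
        }

        try {
            latch.await();                               // wait until every segment has been written
            WritePOIUtils.writeFile(workbook);           // then flush the workbook to disk (see 3.4)
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            pool.shutdown();
        }
    }
}
Note that in this sketch each task writes a disjoint row range, so no two tasks ever touch the same row.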
Several utility classes are used inside this method; they are listed below.
3.1 XSSFWorkbookWrapper
/**
 * Creates and initializes the workbook and its main sheet
 */
public class XSSFWorkbookWrapper {
private XSSFWorkbook workbook;
private XSSFSheet sheetAt;
public XSSFWorkbookWrapper() {
workbook = new XSSFWorkbook();
sheetAt = workbook.createSheet("main");
}
    // initialize the header row
    public void initTitile(String type) {
        XSSFRow row = sheetAt.createRow(0);
        AtomicInteger index = new AtomicInteger(0);
        String[] head = null;
        if (StringUtils.isNotBlank(type)) {
            head = new TitlesColumns().getInitTitles().get(type);
        }
        // assumed loop body: write each title into the first row
        for (int i = 0; head != null && i < head.length; i++) {
            row.createCell(index.getAndIncrement()).setCellValue(head[i]);
        }
    }
}
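The sketches in this post also need access to the underlying workbook, so they assume the wrapper exposes a simple getter like the one below, which is not shown in the original class:
// assumed addition to XSSFWorkbookWrapper, used only by the sketches in this post
public XSSFWorkbook getWorkbook() {
    return workbook;
}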
3.2 WriteDataUtils: wraps the query result and produces the grouped row data
public class WriteDataUtils {

    // converts the queried records into the List<List<String>> rows the writer expects
    // (a possible implementation is sketched below)
    public static List<List<String>> getValuesOut(List records /* ... */) {
        // ...
    }
}
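For reference, here is a minimal sketch of what such a conversion can look like, assuming each record arrives as a Map<String, Object> and the column order comes from TitlesColumns (section 3.5); the parameter names and the map-based record type are assumptions.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical row-flattening helper, not the original WriteDataUtils code.
public class WriteDataUtilsSketch {

    public static List<List<String>> getValuesOut(List<Map<String, Object>> records, String type) {
        // column order for this export type, e.g. "001" -> {"operName", "operIp", ...} (see 3.5)
        String[] columns = new TitlesColumns().getInitColumns().get(type + "_V");
        List<List<String>> rows = new ArrayList<>();
        for (Map<String, Object> rec : records) {
            List<String> row = new ArrayList<>(columns.length);
            for (String column : columns) {
                Object value = rec.get(column);
                row.add(value == null ? "" : String.valueOf(value));
            }
            rows.add(row);
        }
        return rows;
    }
}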
3.4 WritePOIUtils: writes the row data into the target Excel sheet
import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.List;
public class WritePOIUtils {
    private static final Object LOCK = new Object();

    // get the row at index i, creating it if it does not exist yet
    public static synchronized XSSFRow getRow(XSSFSheet sheetAt, Integer i) {
        return sheetAt.getRow(i) == null ? sheetAt.createRow(i) : sheetAt.getRow(i);
    }

    // get the cell at column j, creating it if it does not exist yet
    public static XSSFCell getCell(XSSFRow row, Integer j) {
        return row.getCell(j) == null ? row.createCell(j) : row.getCell(j);
    }

    // write one segment of rows into the sheet, starting at row index startNum
    public void setWorkbookData(XSSFWorkbook workbook,
                                List<List<String>> data,
                                Integer startNum) {
        XSSFSheet sheetAt = workbook.getSheetAt(0);
        Integer endNum = data.size() + startNum;
        Integer index = 0;
        for (int i = startNum; i < endNum; i++) {
            XSSFRow row = getRow(sheetAt, i);
            List<String> values = data.get(index);
            for (int j = 0; j < values.size(); j++) {
                XSSFCell cell = getCell(row, j);
                String s = values.get(j);
                synchronized (LOCK) {
                    cell.setCellValue(s);
                }
            }
            index++;
        }
    }

    // flush the finished workbook to a file on disk
    public static void writeFile(XSSFWorkbook workbook) throws IOException {
        FileOutputStream out = null;
        try {
            // write the workbook to C:\logs\multi.xlsx
            out = new FileOutputStream("C:\\logs\\multi.xlsx");
            workbook.write(out);
            out.flush();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (out != null) {
                out.close();
            }
        }
    }
}
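To make the startNum bookkeeping concrete, a caller writing two 1000-row segments would use offsets like the following; the method and variable names here are hypothetical, and row 0 is reserved for the header.
import java.util.List;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

// Hypothetical caller showing how startNum offsets line up with sheet rows.
public class WritePOIUtilsUsage {
    static void writeTwoSegments(XSSFWorkbook workbook,
                                 List<List<String>> firstThousand,
                                 List<List<String>> secondThousand) {
        WritePOIUtils poiUtils = new WritePOIUtils();
        poiUtils.setWorkbookData(workbook, firstThousand, 1);     // sheet rows 1..1000 (row 0 is the header)
        poiUtils.setWorkbookData(workbook, secondThousand, 1001); // sheet rows 1001..2000
    }
}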
3.5 TitlesColumns: header initialization utility class
import java.util.HashMap;
import java.util.Map;
public class TitlesColumns {

    public static final String[] EXPORT_TITLES = {
            "操作人员", "登录IP", "模块标题", "业务类型", "操作状态", "操作时间"
    };

    public static final String[] EXPORT_COLNUMS = {
            "operName", "operIp", "title", "businessType", "status", "operTime"
    };

    public static Map<String, String[]> initTitles = new HashMap<>();
    public static Map<String, String[]> initColumns = new HashMap<>();

    public TitlesColumns() {
        initTitles = new HashMap<>();
        initTitles.put("001", EXPORT_TITLES);
        initColumns.put("001_V", EXPORT_COLNUMS);
    }

    public Map<String, String[]> getInitTitles() {
        return initTitles;
    }

    public Map<String, String[]> getInitColumns() {
        return initColumns;
    }
}
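The type parameter passed in from the controller selects which header and column set is used; for the only type registered above, a lookup would look like this (hypothetical snippet):
// Hypothetical lookup for export type "001", as the controller would trigger it.
public class TitlesColumnsUsage {
    static void example() {
        String[] titles  = new TitlesColumns().getInitTitles().get("001");    // Excel header labels
        String[] columns = new TitlesColumns().getInitColumns().get("001_V"); // matching entity field names
    }
}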
Start the application and call the endpoint to check the result: a little over 5,000 rows were fully exported in roughly 2 seconds.
Next, still within the same program, let's do the export single-threaded and see how it performs.
4.1 Modified service implementation class
@Override
public void exportSysLogV2(String userName, String startDate, String endDate, String type, HttpServletResponse response) {
    // query the raw operation-log records matching the filter conditions
    List exportLists = workflowTaskMapper.getOperlogList(userName, startDate, endDate);
    // flatten the DB entities into row data for the sheet
    List handleLists = handleDbLists(exportLists);
    // ... write all rows in a single loop and stream the workbook through ExcelLocalUtils (see below)
}
The utility class this method relies on is listed directly below.
import com.google.common.base.Charsets;
import com.sx.common.utils.StringUtils;
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import javax.servlet.http.HttpServletResponse;
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.OutputStream;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class ExcelLocalUtils {

    // writes the rows into the workbook sequentially and streams the file to the client
    // (a possible implementation is sketched below)
    public static void exportExcel(XSSFWorkbook xb, List /* ... */) {
        // ...
    }
}
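Based on the imports above (JavaBeans reflection, an HttpServletResponse output stream), a single-threaded version could look roughly like the sketch below; the parameter list, the bean-property access, the file name, and the header handling are assumptions rather than the original code.
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.io.OutputStream;
import java.net.URLEncoder;
import java.util.List;

import javax.servlet.http.HttpServletResponse;

import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

// Hypothetical single-threaded exporter, not the original ExcelLocalUtils code.
public class ExcelLocalUtilsSketch {

    public static void exportExcel(XSSFWorkbook xb, List<?> records, String[] titles,
                                   String[] columns, HttpServletResponse response) throws Exception {
        Sheet sheet = xb.createSheet("main");

        // header row
        Row head = sheet.createRow(0);
        for (int i = 0; i < titles.length; i++) {
            head.createCell(i).setCellValue(titles[i]);
        }

        // one row per record, one cell per configured column, written sequentially
        for (int r = 0; r < records.size(); r++) {
            Object rec = records.get(r);
            Row row = sheet.createRow(r + 1);
            for (int c = 0; c < columns.length; c++) {
                Object value = readProperty(rec, columns[c]);
                row.createCell(c).setCellValue(value == null ? "" : String.valueOf(value));
            }
        }

        // stream the finished workbook back to the caller
        response.setContentType("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
        response.setHeader("Content-Disposition",
                "attachment;filename=" + URLEncoder.encode("syslog.xlsx", "UTF-8"));
        try (OutputStream out = response.getOutputStream()) {
            xb.write(out);
            out.flush();
        }
    }

    // read a bean property by name via the JavaBeans introspector
    private static Object readProperty(Object bean, String name) throws Exception {
        for (PropertyDescriptor pd : Introspector.getBeanInfo(bean.getClass()).getPropertyDescriptors()) {
            if (pd.getName().equals(name) && pd.getReadMethod() != null) {
                return pd.getReadMethod().invoke(bean);
            }
        }
        return null;
    }
}
The key difference from the multi-threaded path is that every row is written by the single request thread, which is exactly the bottleneck the thread-pool version removes.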
Because the data volume here is still fairly small, the time difference is not dramatic; testing locally against a table with over 50,000 rows, the gap was more than 5 seconds, which gives a sense of the impact. This post walked through a case of using multiple threads to export a large amount of data to Excel. I hope it is useful to anyone who comes across it. Thanks for reading!