Converting CSV to HDF5 with vaex

I have a massive CSV file which cannot fit into memory all at once. How do I convert it to HDF5?

We are working to make this process a simple one-liner. In the meantime, consider this strategy: read the CSV file in chunks, and use vaex to export each chunk to disk as HDF5. Since all resulting HDF5 files share the same structure, vaex.open('/path/to/data/part*') can open all chunks as a single DataFrame. For a small performance improvement, that combined DataFrame can then be exported to disk as one large HDF5 file.

Consider the following code example:

import vaex

# Stream the CSV in fixed-size chunks (each chunk is a pandas DataFrame)
# and export every chunk to its own HDF5 file.
for i, chunk in enumerate(vaex.read_csv('/path/to/data/BigData.csv', chunksize=100_000)):
    df_chunk = vaex.from_pandas(chunk, copy_index=False)
    export_path = f'/path/to/data/part_{i}.hdf5'
    df_chunk.export_hdf5(export_path)

# Open all chunk files as a single lazy, memory-mapped DataFrame,
# then optionally consolidate them into one HDF5 file.
df = vaex.open('/path/to/data/part*')
df.export_hdf5('/path/to/data/Final.hdf5')
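The chunking pattern itself is generic and worth understanding on its own: stream the source once, never holding more than one chunk in memory, and write each chunk to its own part file. Below is a minimal stdlib-only sketch of that pattern (the function name `split_csv` and the part-file naming are illustrative, not part of vaex); vaex's `read_csv(chunksize=...)` performs the equivalent streaming read internally.

```python
import csv
import itertools
import os

def split_csv(src_path, out_dir, chunk_size):
    """Split a CSV into part files of at most chunk_size data rows each.

    Generic illustration of the chunked-conversion pattern: the source is
    read once as a stream, so memory use is bounded by chunk_size rows.
    Returns the list of part-file paths, each carrying the same header.
    """
    paths = []
    with open(src_path, newline='') as f:
        reader = csv.reader(f)
        header = next(reader)  # every part file repeats the header row
        for i in itertools.count():
            rows = list(itertools.islice(reader, chunk_size))
            if not rows:
                break
            path = os.path.join(out_dir, f'part_{i}.csv')
            with open(path, 'w', newline='') as out:
                writer = csv.writer(out)
                writer.writerow(header)
                writer.writerows(rows)
            paths.append(path)
    return paths
```

Because every part file has the same header (structure), the parts can later be opened together as one logical dataset, which is exactly why vaex.open('part*') works on the HDF5 chunks above.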

Sources:
https://www.leiphone.com/news/201912/pW63YGX6lJapjyf9.html
https://vaex.readthedocs.io/en/latest/faq.html#I-have-a-massive-CSV-file-which-I-can-not-fit-all-into-memory-at-one-time.-How-do-I-convert-it-to-HDF5?
