Bulk download papers from Sci-Hub for text mining

https://janikvonrotz.ch/2020/05/07/bulk-download-papers-from-scihub-for-text-mining/

Last evening I wanted to play some games with my brother. Instead we ended up writing a simple script to bulk download papers from Sci-Hub. He studies bioinformatics and is currently doing some meta research in his field. By crawling a publication database for specific keywords he got a list of papers that need to be analyzed. However, most of the papers are hidden behind paywalls. Luckily there's Sci-Hub, the most hated and beloved platform for getting your hands on almost any scientific paper. He asked me whether I could help him build a script that downloads papers from Sci-Hub based on a list of DOIs.

Sure, I said, opened the browser and had a look at the requests that need to be performed to download a single paper.

When submitting the DOI, the following request is sent:

[Screenshot: the request captured in the browser's dev tools, returning HTTP 302]

The HTTP status code 302 means that a redirect is happening. If we want to automate this process, following redirects must be enabled.

[Screenshot: the same request with redirects followed]
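
From the shell, the difference is a single flag. A minimal sketch, assuming the sci-hub.se mirror and that the DOI can simply be appended to the URL path (the actual request shape is whatever the dev-tools capture shows):

# Without -L, curl stops at the 302 response.
curl -s -o /dev/null -w "%{http_code}\n" "https://sci-hub.se/10.1126/science.aaf5211"   # prints 302
# -L tells curl to follow the Location header to the final response.
curl -s -L "https://sci-hub.se/10.1126/science.aaf5211"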

The redirected response contains the paper page. The URL of the PDF file can be extracted using a regex.
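
One way to sketch that extraction with grep, assuming the page links to an absolute URL ending in .pdf (the markup varies between mirrors, so the pattern is an assumption):

# Fetch the paper page and pull out the first link that ends in .pdf.
pdf_url=$(curl -s -L "https://sci-hub.se/10.1126/science.aaf5211" \
    | grep -oE 'https?://[^"]+\.pdf' \
    | head -n 1)
echo "$pdf_url"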

The initial request can be copied as a curl command. This makes writing a script and bypassing any user-agent checks much easier.

[Screenshot: the "Copy as cURL" option in the browser's network tab]
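
The copied command looks roughly like this (URL, headers, and form field are reconstructed for illustration, not taken verbatim from the capture):

curl 'https://sci-hub.se/' \
  -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data-raw 'request=10.1126%2Fscience.aaf5211'

Keeping the browser's User-Agent header is what makes the agent checks a non-issue: the request looks the same as the one the browser sent.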

We both started Visual Studio Code and connected via the live coding plugin. My brother sent me an excerpt of the DOI list:

doi-list.txt

10.3390/ijms21062239
10.1007/s12094-019-02276-8
10.31782/IJCRR.2019.11242
10.1016/j.chom.2019.05.007
10.3390/v11070638
10.1128/IAI.00733-18
10.1186/s12861-019-0183-y
10.1155/2019/8472712
10.1002/stem.2852
10.3390/genes9040176
10.1016/j.neulet.2018.01.040
10.1093/molehr/gax070
10.15252/emmm.201708213
10.1007/s40139-017-0137-7
10.1111/cas.13155
10.4103/0366-6999.191782
10.1126/science.aaf5211

And together we built this simple bash script:

scihub-download.sh

#!/usr/bin/env bash

urlencode() {
    # Percent-encode a string so it can safely be used as a URL parameter.
    # LC_COLLATE=C makes the character range below match ASCII only.
    old_lc_collate=$LC_COLLATE
    LC_COLLATE=C

    local length="${#1}"
    for (( i = 0; i < length; i++ )); do
        local c="${1:i:1}"
        case $c in
            # Unreserved characters pass through unchanged.
            [a-zA-Z0-9.~_-]) printf '%s' "$c" ;;
            # Everything else becomes %XX ("'$c" yields the character's byte value).
            *) printf '%%%02X' "'$c" ;;
        esac
    done

    LC_COLLATE=$old_lc_collate
}

# Read the DOI list into an array, one DOI per line.
readarray -t list < doi-list.txt

It reads the text file and sends a request to Sci-Hub for each entry. If the response contains a valid PDF link, the file is downloaded. The DOI request parameter must be URL-encoded.
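
A sketch of the download loop that this describes, reusing urlencode and the assumptions from above (mirror URL, request form field, and the .pdf regex may all need adapting):

for doi in "${list[@]}"; do
    # urlencode turns e.g. 10.1002/stem.2852 into 10.1002%2Fstem.2852.
    encoded=$(urlencode "$doi")
    page=$(curl -s -L "https://sci-hub.se/" --data "request=$encoded")
    pdf_url=$(echo "$page" | grep -oE 'https?://[^"]+\.pdf' | head -n 1)
    if [ -n "$pdf_url" ]; then
        # Use the DOI as the file name, with slashes replaced.
        curl -s -L -o "${doi//\//_}.pdf" "$pdf_url"
        echo "Downloaded $doi"
    else
        echo "No PDF link found for $doi" >&2
    fi
done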

In the next step he will process the content of the papers and analyze them for specific keywords.
