macでインフォマティクス

A collection of notes on informatics topics related to HTS (NGS) data analysis.

Deepurify, a multi-modal deep language model for removing contaminant sequences from MAGs

 

 Metagenome-assembled genomes (MAGs) offer valuable insights for exploring microbial dark matter with metagenomic sequencing data. However, there is growing concern that contamination within MAGs can seriously affect the results of downstream analyses. Current MAG decontamination tools rely mainly on marker genes and do not fully exploit the contextual information of genomic sequences. To overcome this limitation, the authors introduce Deepurify for MAG decontamination. Deepurify uses a multi-modal deep language model trained with contrastive learning to match microbial genome sequences to their taxonomic lineages. It assigns the contigs within a MAG to a MAG-separated tree and applies a tree-search algorithm that partitions the MAG into sub-MAGs so as to maximize the number of high- and medium-quality sub-MAGs. The study shows that Deepurify outperformed MDMcleaner and MAGpurify on simulated data, CAMI datasets, and real-world datasets of differing complexity. Deepurify increased the number of high-quality MAGs in soil (20.0%), ocean (45.1%), plant (45.5%), freshwater (33.8%), and human fecal (28.5%) metagenomic sequencing datasets.

 

Installation

The minimum hardware requirements are listed below (the authors themselves ran it under much richer conditions on a DGX server). Here it was tested on WSL (Ubuntu 22) under Windows 11; a quick way to check the RAM and GPU requirements is sketched after the list. Note that the installation is quite heavy, in particular resolving the dependencies from deepurify-conda-env.yml.

Minimum hardware requirements

  • System: Linux
  • CPU: No restriction.
  • RAM: >= 32 GB
  • GPU: The GPU memory must be equal to or greater than 6 GB. (5273MB)
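
Before installing, the available RAM and GPU memory can be checked with standard tools. This is only a convenience sketch and assumes an NVIDIA GPU whose driver is exposed to WSL2 so that nvidia-smi works:

#rough check against the minimum hardware requirements (Linux / WSL2)
free -g | awk '/^Mem:/ {print "RAM (GB): " $2}'  #should be >= 32
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader  #GPU memory should be >= 6 GB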

Dependencies

  • Prodigal v 2.6.3 (ORF/CDS-prediction)
  • HMMER v.3.3.2 (Detecting conserved single-copy marker genes)
  • CheckM2 v 1.0.1 (Evaluate the quality of MAGs)
  • Galah v0.4.1 (Filter replicated MAGs)
  • CONCOCT v1.1.0 (Binner)
  • MetaBAT2 v2.15 (Binner)
  • Semibin2 v2.1.0 (Binner)
  • PyTorch v2.1.0

Main program (GitHub)

git clone https://github.com/zoubohao/Deepurify.git
cd Deepurify/
mamba create -n deepurify38 python=3.8
conda activate deepurify38
#main package
pip install Deepurify==2.4.3

#dependency 1
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
#dependency 2
mamba env update --file deepurify-conda-env.yml

#If the yml update is too heavy, run the following instead
conda activate deepurify38
conda config --env --add channels anaconda
conda config --env --add channels conda-forge
conda config --env --add channels bioconda
conda config --env --add channels defaults
#to be safe, install the packages one at a time
mamba install concoct=1.1.0 -y
mamba install metabat2=2.15 -y
mamba install prodigal=2.6.3 -y
mamba install hmmer=3.3.2 -y
mamba install tensorflow=2.12 -y
mamba install lightgbm -y
mamba install diamond=2.0.4 -y
mamba install numpy=1.23.5 -y
mamba install scipy=1.8.0 -y
mamba install pandas=1.4.0 -y
mamba install scikit-learn=0.23.2 -y
mamba install galah=0.4.1 -y #heavy => can also be built with cargo
mamba install libopenblas=0.3.25 -y
mamba install gsl=2.7.0 -y #heavy => can be replaced with pip install
mamba install fastANI -y #heavy => can be replaced with pip install
mamba install tqdm -y
mamba install bedtools -y
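
Once the environment is built, it may be worth confirming that the external tools Deepurify calls are actually on PATH before running anything heavy. A minimal sketch (the binary names are the usual defaults for these packages; adjust if your installation differs):

#check that the external dependencies are callable (binary names are assumptions)
for cmd in prodigal hmmsearch concoct metabat2 galah checkm2; do
    command -v "$cmd" >/dev/null 2>&1 && echo "OK: $cmd" || echo "MISSING: $cmd"
done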

> deepurify -h

Deepurify version: *** v2.4.3 ***

usage: deepurify [-h] {clean,iter-clean} ...

 

Deepurify is a tool to improving the quality of MAGs.

 

positional arguments:

  {clean,iter-clean}

    clean             The **CLEAN** mode. Only clean the MAGs in the input folder.

    iter-clean        The **iter-clean** mode. Binning the contigs and cleaning the MAGs with applying the iter-clean strategy. This mode can ensemble (or apply single binner) the binning results from different binners. Make sure there is no space in the contigs' names.

 

optional arguments:

  -h, --help          show this help message and exit

> deepurify clean -h

Deepurify version: *** v2.4.3 ***

usage: deepurify clean [-h] -i INPUT_PATH -o OUTPUT_PATH --bin_suffix BIN_SUFFIX [-db DB_FOLDER_PATH] [--gpu_num GPU_NUM] [--cuda_device_list CUDA_DEVICE_LIST [CUDA_DEVICE_LIST ...]] [--batch_size_per_gpu BATCH_SIZE_PER_GPU] [--each_gpu_threads EACH_GPU_THREADS] [--overlapping_ratio OVERLAPPING_RATIO]

                       [--cut_seq_length CUT_SEQ_LENGTH] [--mag_length_threshold MAG_LENGTH_THRESHOLD] [--num_process NUM_PROCESS] [--topk_or_greedy_search {topk,greedy}] [--topK_num TOPK_NUM] [--temp_output_folder TEMP_OUTPUT_FOLDER]

 

optional arguments:

  -h, --help            show this help message and exit

  -i INPUT_PATH, --input_path INPUT_PATH

                        The folder of input MAGs.

  -o OUTPUT_PATH, --output_path OUTPUT_PATH

                        The folder used to output cleaned MAGs.

  --bin_suffix BIN_SUFFIX

                        The bin suffix of MAG files.

  -db DB_FOLDER_PATH, --db_folder_path DB_FOLDER_PATH

                        The path of database folder. By default, if no path is provided (i.e., set to None), it is expected that the environment variable 'DeepurifyInfoFiles' has been set to point to the appropriate folder. Please ensure that the 'DeepurifyInfoFiles' environment variable is correctly

                        configured if the path is not explicitly provided.

  --gpu_num GPU_NUM     The number of GPUs to be used can be specified. Defaults to 1. If you set it to 0, the code will utilize the CPU. However, please note that using the CPU can result in significantly slower processing speed. It is recommended to provide at least one GPU (>= GTX-1060-6GB) for

                        accelerating the speed.

  --cuda_device_list CUDA_DEVICE_LIST [CUDA_DEVICE_LIST ...]

                        The gpu id that you want to apply. You can set '0 1' to use gpu0 and gpu1. The code would auto apply GPUs if it is None. Default to None.

  --batch_size_per_gpu BATCH_SIZE_PER_GPU

                        The batch size per GPU determines the number of sequences that will be loaded onto each GPU. This parameter is only applicable if the --gpu_num option is set to a value greater than 0. The default value is 4, meaning that one sequences will be loaded per GPU batch. The batch size for

                        CPU is 4.

  --each_gpu_threads EACH_GPU_THREADS

                        The number of threads per GPU (or CPU) determines the parallelism level during contigs' inference stage. If the value of --gpu_num is greater than 0, each GPU will have a set number of threads to do inference. Similarly, if --gpu_num is set to 0 and the code will run on CPU, the

                        specified number of threads will be used. By default, the number of threads per GPU or CPU is set to 1. The --batch_size_per_gpu value will be divided by the number of threads to determine the batch size per thread.

  --overlapping_ratio OVERLAPPING_RATIO

                        The --overlapping_ratio is a parameter used when the length of a contig exceeds the specified --cut_seq_length. By default, the overlapping ratio is set to 0.5. This means that when a contig is longer than the --cut_seq_length, it will be split into overlapping subsequences with 0.5

                        overlap between consecutive subsequences.

  --cut_seq_length CUT_SEQ_LENGTH

                        The --cut_seq_length parameter determines the length at which a contig will be cut if its length exceeds this value. The default setting is 8192, which is also the maximum length allowed during training. If a contig's length surpasses this threshold, it will be divided into smaller

                        subsequences with lengths equal to or less than the cut_seq_length.

  --mag_length_threshold MAG_LENGTH_THRESHOLD

                        The threshold for the total length of a MAG's contigs is used to filter generated MAGs after applying single-copy genes (SCGs). The default threshold is set to 200,000, which represents the total length of the contigs in base pairs (bp). MAGs with a total contig length equal to or

                        greater than this threshold will be considered for further analysis or inclusion, while MAGs with a total contig length below the threshold may be filtered out.

  --num_process NUM_PROCESS

                        The maximum number of threads will be used. All CPUs will be used if it is None. Defaults to None

  --topk_or_greedy_search {topk,greedy}

                        Topk searching or greedy searching to label a contig. Defaults to "topk".

  --topK_num TOPK_NUM   During the top-k searching approach, the default behavior is to search for the top-k nodes that exhibit the highest cosine similarity with the contig's encoded vector. By default, the value of k is set to 3, meaning that the three most similar nodes in terms of cosine similarity will

                        be considered for labeling the contig. Please note that this parameter does not have any effect when using the greedy search approach (topK_num=1). Defaults to 3.

  --temp_output_folder TEMP_OUTPUT_FOLDER

                        The temporary files generated during the process can be stored in a specified folder path. By default, if no path is provided (i.e., set to None), the temporary files will be stored in the parent folder of the '--input_path' location. However, you have the option to specify a

                        different folder path to store these temporary files if needed.

> deepurify iter-clean -h

Deepurify version: *** v2.4.3 ***

usage: deepurify iter-clean [-h] -c CONTIGS_PATH -b SORTED_BAM_PATH -o OUTPUT_PATH [-db DB_FOLDER_PATH] [--binning_mode BINNING_MODE] [--gpu_num GPU_NUM] [--cuda_device_list CUDA_DEVICE_LIST [CUDA_DEVICE_LIST ...]] [--batch_size_per_gpu BATCH_SIZE_PER_GPU] [--each_gpu_threads EACH_GPU_THREADS]

                            [--overlapping_ratio OVERLAPPING_RATIO] [--cut_seq_length CUT_SEQ_LENGTH] [--mag_length_threshold MAG_LENGTH_THRESHOLD] [--num_process NUM_PROCESS] [--topk_or_greedy_search {topk,greedy}] [--topK_num TOPK_NUM] [--temp_output_folder TEMP_OUTPUT_FOLDER]

 

optional arguments:

  -h, --help            show this help message and exit

  -c CONTIGS_PATH, --contigs_path CONTIGS_PATH

                        The contigs fasta path.

  -b SORTED_BAM_PATH, --sorted_bam_path SORTED_BAM_PATH

                        The sorted bam path.

  -o OUTPUT_PATH, --output_path OUTPUT_PATH

                        The folder used to output cleaned MAGs.

  -db DB_FOLDER_PATH, --db_folder_path DB_FOLDER_PATH

                        The path of database folder. By default, if no path is provided (i.e., set to None), it is expected that the environment variable 'DeepurifyInfoFiles' has been set to point to the appropriate folder. Please ensure that the 'DeepurifyInfoFiles' environment variable is correctly

                        configured if the path is not explicitly provided.

  --binning_mode BINNING_MODE

                        The semibin2, concoct, metabat2 will all be run if this parameter is None. The other modes are: 'semibin2', 'concoct', and 'metabat2'. Defaults to None.

  --gpu_num GPU_NUM     The number of GPUs to be used can be specified. Defaults to 1. If you set it to 0, the code will utilize the CPU. However, please note that using the CPU can result in significantly slower processing speed. It is recommended to provide at least one GPU (>= GTX-1060-6GB) for

                        accelerating the speed.

  --cuda_device_list CUDA_DEVICE_LIST [CUDA_DEVICE_LIST ...]

                        The gpu id that you want to apply. You can set '0 1' to use gpu0 and gpu1. The code would auto apply GPUs if it is None. Default to None.

  --batch_size_per_gpu BATCH_SIZE_PER_GPU

                        The batch size per GPU determines the number of sequences that will be loaded onto each GPU. This parameter is only applicable if the --gpu_num option is set to a value greater than 0. The default value is 4, meaning that one sequences will be loaded per GPU batch. The batch size for

                        CPU is 4.

  --each_gpu_threads EACH_GPU_THREADS

                        The number of threads per GPU (or CPU) determines the parallelism level during contigs' inference stage. If the value of --gpu_num is greater than 0, each GPU will have a set number of threads to do inference. Similarly, if --gpu_num is set to 0 and the code will run on CPU, the

                        specified number of threads will be used. By default, the number of threads per GPU or CPU is set to 1. The --batch_size_per_gpu value will be divided by the number of threads to determine the batch size per thread.

  --overlapping_ratio OVERLAPPING_RATIO

                        The --overlapping_ratio is a parameter used when the length of a contig exceeds the specified --cut_seq_length. By default, the overlapping ratio is set to 0.5. This means that when a contig is longer than the --cut_seq_length, it will be split into overlapping subsequences with 0.5

                        overlap between consecutive subsequences.

  --cut_seq_length CUT_SEQ_LENGTH

                        The --cut_seq_length parameter determines the length at which a contig will be cut if its length exceeds this value. The default setting is 8192, which is also the maximum length allowed during training. If a contig's length surpasses this threshold, it will be divided into smaller

                        subsequences with lengths equal to or less than the cut_seq_length.

  --mag_length_threshold MAG_LENGTH_THRESHOLD

                        The threshold for the total length of a MAG's contigs is used to filter generated MAGs after applying single-copy genes (SCGs). The default threshold is set to 200,000, which represents the total length of the contigs in base pairs (bp). MAGs with a total contig length equal to or

                        greater than this threshold will be considered for further analysis or inclusion, while MAGs with a total contig length below the threshold may be filtered out.

  --num_process NUM_PROCESS

                        The maximum number of threads will be used. All CPUs will be used if it is None. Defaults to None

  --topk_or_greedy_search {topk,greedy}

                        Topk searching or greedy searching to label a contig. Defaults to "topk".

  --topK_num TOPK_NUM   During the top-k searching approach, the default behavior is to search for the top-k nodes that exhibit the highest cosine similarity with the contig's encoded vector. By default, the value of k is set to 3, meaning that the three most similar nodes in terms of cosine similarity will

                        be considered for labeling the contig. Please note that this parameter does not have any effect when using the greedy search approach (topK_num=1). Defaults to 3.

  --temp_output_folder TEMP_OUTPUT_FOLDER

                        The temporary files generated during the process can be stored in a specified folder path. By default, if no path is provided (i.e., set to None), the temporary files will be stored in the parent folder of the '--input_path' location. However, you have the option to specify a

                        different folder path to store these temporary files if needed.

 

 

Preparing the database

Download the model weights file (2.3 GB) from the link in the repository, extract it, and set the path.

export DeepurifyInfoFiles=/path/to/Deepurify-DB/Deepurify-DB/

Be careful to point the variable to Deepurify-DB/Deepurify-DB/, one level below the extracted directory.
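
To avoid re-exporting the variable in every new shell, it can be appended to the shell profile (the path below is a placeholder for the actual extraction location). Alternatively, the database folder can be passed on each run with -db / --db_folder_path, as documented in the help above.

#make the database path persistent across sessions (path is a placeholder)
echo 'export DeepurifyInfoFiles=/path/to/Deepurify-DB/Deepurify-DB/' >> ~/.bashrc
source ~/.bashrc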

 

Usage

1. clean mode

Specify the directory containing the MAG fasta files to be decontaminated (e.g., fasta files that have already been binned). --bin_suffix fasta makes the tool recognize files with the fasta extension.

deepurify clean  -i mag_dir/ -o outdir --bin_suffix fasta --gpu_num 2 --cuda_device_list 1 2

#Multiple GPUs can be specified when available (iter-clean form shown here; see section 2 below). Note the sorted BAM is passed with -b.
deepurify iter-clean  -c ./contigs.fasta -o ./output_folder/ -b ./sorted.bam -db /path/of/Deepurify-DB/ --gpu_num 2 --cuda_device_list 1 2
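
If no usable GPU is available under WSL, the help above documents --gpu_num 0 for CPU-only execution (much slower). A hedged example, passing the database folder explicitly with -db:

#CPU-only run (slow); --num_process caps the number of CPU threads used
deepurify clean -i mag_dir/ -o outdir_cpu --bin_suffix fasta --gpu_num 0 --num_process 16 -db /path/to/Deepurify-DB/Deepurify-DB/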

Example output

Fasta files with the contamination removed are written to the output folder. In a test run, cleaning 6 raw binned fasta files produced 8 bins, presumably because some groups of contigs were judged to originate from a different genome than the rest of their bin.

 

MetaInfo.tsv
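
To get a quick overview of the results from the shell (assumptions: the cleaned bins use the .fasta suffix and MetaInfo.tsv is written to the output folder, as observed in the test above):

#count the cleaned MAGs and peek at the per-MAG summary table (paths are assumptions)
ls outdir/*.fasta | wc -l
column -t -s $'\t' outdir/MetaInfo.tsv | head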

 

2. iter-clean mode

In this mode, after initial binning (MetaBAT2, SemiBin2, CONCOCT), high-quality bins are built and cleaned iteratively using the mapping information and the deep model (see the figure in the repository). The inputs are therefore the assembled contigs fasta and a sorted BAM of the reads mapped to it, rather than pre-binned MAG fasta files; a sketch of how to prepare the BAM follows.
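
The sorted BAM is simply the reads mapped back to the assembly. A minimal sketch using minimap2 and samtools (the read file names are placeholders; any short-read mapper producing a coordinate-sorted, indexed BAM should work):

#map reads back to the contigs and coordinate-sort the alignments (read file names are placeholders)
minimap2 -ax sr -t 16 contigs.fasta reads_1.fq.gz reads_2.fq.gz \
  | samtools sort -@ 16 -o sorted.bam -
samtools index sorted.bam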

deepurify iter-clean -c ./contigs.fasta -o outdir -b ./sorted.bam -db /path/of/Deepurify-DB/ --gpu_num 2 --cuda_device_list 1 2

Note that this mode is naturally heavier than clean mode. On a powerful machine with multiple GPUs, specifying several GPUs speeds it up.

 

Comments

If CheckM2 does not run, an intermediate step will fail with an error, so first confirm that the CheckM2 help message can be displayed.
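
For example (a minimal check; the database subcommand is part of CheckM2's own CLI and is only needed when its reference database has not been downloaded yet):

#confirm that CheckM2 starts and, if needed, fetch its reference database
checkm2 --help
checkm2 database --download --path /path/to/checkm2_db  #only if the database has not been installed yet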

Citation

A multi-modal deep language model for contaminant removal from metagenome-assembled genomes

Bohao Zou, Jingjing Wang, Yi Ding, Zhenmiao Zhang, Yufen Huang, Xiaodong Fang, Ka Chun Cheung, Simon See & Lu Zhang 
Nature Machine Intelligence, volume 6, pages 1245–1255 (2024)

 

Related