# phap - Phage Host Analysis Pipeline
A snakemake workflow that wraps various phage-host prediction tools.
* Uses
[Singularity](https://sylabs.io/) containers for execution of all tools.
When possible (i.e. the built image is not larger than a few `G`s),
tools **and** their dependencies are bundled in the same container. This means
you do not need to worry about getting models or any other external databases.
Tool | Publication
---|---
[RaFAH](https://sourceforge.net/projects/rafah/)|[Coutinho F. H. et al., 2020](https://www.biorxiv.org/content/10.1101/2020.09.25.313155v1?rss=1)
[vHULK](https://github.com/LaboratorioBioinformatica/vHULK)|[Amgarten D. et al., 2020](https://www.biorxiv.org/content/10.1101/2020.12.06.413476v1)
[VirHostMatcher-Net](https://github.com/WeiliWw/VirHostMatcher-Net)|[Wang W. et al., 2020](https://doi.org/10.1093/nargab/lqaa044)
[WIsH](https://github.com/soedinglab/WIsH)|[Galiez C. et al., 2017](https://academic.oup.com/bioinformatics/article/33/19/3113/3964377)
## Installation
### Dependencies
To run the workflow you will need
- `snakemake > 5.x` (developed with `5.30.1`)
- `singularity >= 3.6` (developed with `3.6.3`)
- `biopython >= 1.78` (developed with `1.78`)
### Conda environment
It is recommended to use a
[conda environment](https://docs.conda.io/projects/conda/en/latest/).
The file `environment.txt` can be used to recreate the complete environment
used during development.
> The provided `environment.txt` contains an explicit list of all packages,
> produced with `conda list -n hp --explicit > environment.txt`.
> This ensures all packages are installed with exactly the same
> versions/builds, minimizing the risk of running into dependency issues.
To get a working environment
```
$ git clone https://git.science.uu.nl/papanikos/phap.git
$ cd phap

# Create the environment from the explicit package list
# Note the long notation --file flag; -f will not work.
$ conda create -n hp --file environment.txt

# Activate it - use the name you gave above, if it is different
$ conda activate hp

# Verify the installation
$ snakemake --version
5.30.1
```
## Configuration
### Input data
The tools wrapped in this workflow expect phage sequences as input.
You should try to make sure that the input sequences you want to analyze
correspond to phage genomes/contigs (or at least viruses).
You can input any valid fasta file, but the
[GIGO concept](https://en.wikipedia.org/wiki/Garbage_in,_garbage_out)
applies.
A separate workflow to identify phage/viral genomes/contigs is
[What the Phage](https://github.com/replikation/What_the_Phage).
The current workflow can handle multiple samples.
For each sample, **all viral contigs to be analyzed should be provided as a
single multifasta** (can be `gz`ipped).
A mapping between sample ids and their corresponding fasta file is provided as
a samplesheet (see below).
### Sample sheet
You must define a samplesheet with two comma (`,`) separated columns and the
header `sample,fasta`. Values from the `sample` column must be unique and
are used as sample identifiers. Their corresponding `fasta` values must be
valid paths to multifasta files with the phage sequences for that sample.
An example
```
$ cat samples.csv
sample,fasta
s01,/path/to/s01.fna
s02,/path/to/another.fna.gz
```
> Note
> There is no need to follow any convention for the fasta file name to
> reflect the sample id. The values in the sample column are the ones to worry
> about, as these are the ones used as wildcards within the Snakefile.
You can
- Fill in the location of the samplesheet within the `config.yml`.
- Drop the file in the workdir - **Attention**: It should be named `samples.csv`
- Use `snakemake`'s `--config samplesheet=/path/to/my_samples.csv` when
executing the workflow.
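For the first option, the entry in `config.yml` could look like this (the key name is assumed to mirror the `--config` override shown above):

```
# config.yml
samplesheet: /path/to/my_samples.csv
```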
### Databases
* RaFAH, vHULK

  For these tools there is no need to pre-download and set up anything - all
  data and software dependencies required for running them are bundled within
  their containers.
* VirHostMatcher-Net, WIsH

  Databases and models need to be downloaded from the VirHostMatcher-Net data
  repo ([see here](https://github.com/WeiliWw/VirHostMatcher-Net#downloading)).
  WIsH models for the 62,493 host genomes used in their paper are also provided
  and are used here for WIsH predictions.
### Singularity containers
Definition files, along with documentation of how to use them to build
the containers are in [resources/singularity](./resources/singularity).
The pre-built containers are all available through the
[standard singularity library](https://cloud.sylabs.io/library/papanikos_182).
## Usage
Basic:
```
# From within this directory
# Make sure you have defined a samplesheet
$ snakemake -j16 --use-singularity
```
For VirHostMatcher-Net and WIsH you must also bind the directory holding their
databases into the containers by appending
```
--singularity-args "-B /path/to/databases/:/data"
```
where `/path/to/databases/` is the directory containing tables, WIsH models and
CRISPR blast databases.
> Note
>
> Binding the dir like this is required if the files are stored in some
> shared location and not on the local filesystem.
## Output
All output is stored under a `results` directory within the main workdir.
Results are stored per sample according to the sample ids you provided in the
sample sheet.
For each sample, results for each tool are stored in directories named after
the tool. An example looks like this:
```
results/A/
├── all_predictions.tsv
├── rafah
│ ├── A_CDS.faa
│ ├── A_CDS.fna
│ ├── A_CDS.gff
│ ├── A_CDSxMMSeqs_Clusters
│ ├── A_Genomes.fasta
│ ├── A_Genome_to_Domain_Score_Min_Score_50-Max_evalue_1e-05.tsv
│ ├── A_Ranger_Model_3_Predictions.tsv
│ ├── A_Seq_Info.tsv
│ └── predictions.tsv
├── tmp
│ ├── genomes
│ └── reflist.txt
├── vhmnet
│ ├── feature_values
│ ├── predictions
│ ├── predictions.tsv
│ └── tmp
├── vhulk
│ ├── predictions.tsv
│ └── results
└── wish
├── llikelihood.matrix
├── prediction.list
└── predictions.tsv
```
### Per sample
* `all_predictions.tsv`: Contains the best prediction per contig (rows) for
each tool, along with the single value (score, p-value, etc.) each tool
reports as its confidence in that prediction. An example:
```
contig vhulk_pred vhulk_score rafah_pred rafah_score vhmnet_pred vhmnet_score wish_pred wish_score
NC_005964.2 None 4.068828 Mycoplasma 0.461 Mycoplasma fermentans 0.9953 Bacteria;Tenericutes;Mollicutes;Mycoplasmatales;Mycoplasmataceae;Mycoplasma;Mycoplasma fermentans;Mycoplasma fermentans MF-I2 -1.2085700000000001
NC_015271.1 Escherichia_coli 1.0301523 Salmonella 0.495 Muricauda pacifica 0.9968 Bacteria;Proteobacteria;Gammaproteobacteria;Enterobacterales;Enterobacteriaceae;Raoultella;Raoultella sp. NCTC 9187;Raoultella sp. NCTC 9187 -1.3869200000000002
NC_023719.1 Bacillus 0.0012575098 Bacillus 0.55 Clostridium sp. LS 1.0000 Bacteria;Firmicutes;Clostridia;Clostridiales;Clostridiaceae;Clostridium;Clostridium beijerinckii;Clostridium beijerinckii -1.29454
```
* `tmp` directory
  * Contains one fasta file per input genome, along with other intermediate
    files necessary for a smooth execution of the workflow.
* `rafah`
  * All files prefixed with `<sample_id>_` are RaFAH's raw output.
  * `predictions.tsv`: A selection of the 1st (`Contig`), 6th
    (`Predicted_Host`) and 7th (`Predicted_Host_Score`) columns from the file
    `<sample_id>_Seq_Info.tsv`.
* `vhulk`
  * `results.csv`: Copy of `results/<sample_id>/tmp/genomes/results/results.csv`.
  * `predictions.tsv`: A selection of the 1st (`BIN/genome`), 10th
    (`final_prediction`) and 11th (`entropy`) columns from `results.csv`.
* `vhmnet`
  * Directories `feature_values` and `predictions` are the raw output.
  * Directory `tmp` is a temporary dir written by `VirHostMatcher-Net` for
    doing its magic.
  * `predictions.tsv` contains the contig, host taxonomy and scores.
* `wish`
  * Files `llikelihood.matrix` and `prediction.list` are the raw output.
  * `predictions.tsv` has the contig, host taxonomy and **llikelihood** scores.
Logs capturing stdout and stderr during execution of each rule can be found in