Explore Workflows


workflow graph Trim Galore ChIP-Seq pipeline single-read

This ChIP-Seq pipeline is based on the original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **ChIP-Seq** basic analysis workflow for a **single-read** experiment with Trim Galore.

### Data Analysis

SciDAP starts from the .fastq files which most DNA cores and commercial NGS companies return. Starting from raw data allows us to ensure that all experiments have been processed in the same way and simplifies the deposition of data to GEO upon publication. The data can be uploaded from the user's computer, downloaded directly from an FTP server of the core facility by providing a URL, or fetched from GEO by providing an SRA accession number.

Our current pipelines include the following steps:

1. Trimming the adapters with TrimGalore. This step is particularly important when the reads are long and the fragments are short, resulting in sequencing adapters at the ends of reads. If the adapter is not removed, the read will not map. TrimGalore can recognize standard adapters, such as Illumina or Nextera/Tn5 adapters.
2. QC
3. (Optional) Trimming a specified number of bases from the 5' or 3' end of each read.
4. Mapping reads with Bowtie. Only uniquely mapped reads with less than 3 mismatches are used in the downstream analysis. Results are saved as a .bam file.
5. (Optional) Removal of duplicates (reads/pairs of reads mapping to exactly the same location). This step is used to remove reads overamplified during PCR. Unfortunately, it may also remove "good" reads. We usually do not remove duplicates unless the library is heavily duplicated. Please note that MACS2 will remove 'excessive' duplicates during peak calling in a smart way (those not supported by other nearby reads).
6. Peak calling by MACS2. Optionally, it is possible to specify the read extension length for MACS2 to use if the automatically determined length is wrong.
7. Generation of BigWig coverage files for display in the browser. The coverage shows the number of fragments at each base in the genome, normalized to the number of millions of mapped reads. In the case of PE sequencing the fragments are real, but in the case of single reads the fragments are estimated by extending reads to the average fragment length found by MACS2 or specified by the user in step 6.

### Details

_Trim Galore_ is a wrapper around [Cutadapt](https://github.com/marcelm/cutadapt) and [FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/) to consistently apply adapter and quality trimming to FastQ files, with extra functionality for RRBS data.

As outputs, the workflow returns a coordinate-sorted BAM file along with its BAI index, quality statistics of the input FASTQ file, read coverage as a BigWig file, peak calls as narrowPeak or broadPeak files, islands with the assigned nearest genes and region types, and data for the average tag density plot (computed from the BAM file).

The workflow starts with the *fastx_quality_stats* step from the FASTX-Toolkit, which calculates quality statistics for the input FASTQ file. At the same time, `bowtie` is used to align reads from the input FASTQ file to the reference genome (*bowtie_aligner*). The output of this step is an unsorted SAM file, which is sorted and indexed by `samtools sort` and `samtools index` (*samtools_sort_index*). Depending on the workflow's input parameters, the sorted and indexed BAM file can be processed by `samtools markdup` (*samtools_remove_duplicates*) to get rid of duplicated reads. Right after that, `macs2 callpeak` performs peak calling (*macs2_callpeak*); an approximate command sketch of these core steps is shown below.
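For orientation only, the command-line calls behind these core steps look roughly like the sketch below. This is a minimal, assumption-laden approximation rather than the exact parameters wired into the CWL steps; the index prefix, genome size (`-g hs`), and file names are placeholders.

```bash
# Approximate shell equivalents of the core steps (illustrative parameters only).
trim_galore --fastqc reads.fastq                     # adapter trimming + FastQC report

# Keep uniquely mapped reads with few mismatches; write SAM output.
bowtie -S -m 1 -v 2 genome_index reads_trimmed.fq aligned.sam

samtools sort -o aligned.sorted.bam aligned.sam      # samtools_sort_index: sort ...
samtools index aligned.sorted.bam                    # ... and index

# Optional duplicate removal (samtools_remove_duplicates).
samtools markdup -r aligned.sorted.bam aligned.dedup.bam

# Peak calling (macs2_callpeak); "-g hs" is a placeholder genome size.
macs2 callpeak -t aligned.dedup.bam -f BAM -g hs -n sample
```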
From the returned outputs, the next step (*macs2_island_count*) calculates the number of islands and the estimated fragment size. If the estimated fragment size is less than 80 bp (hardcoded in the workflow), `macs2 callpeak` is rerun with a forced fixed fragment size (*macs2_callpeak_forced*). If the workflow input parameters already requested peak calling with a fixed fragment size, this step is skipped and the original peak calling results are kept. The workflow then recalculates the number of islands and the estimated fragment size (*macs2_island_count_forced*) for the data produced by the *macs2_callpeak_forced* step; if that step was skipped, the results of *macs2_island_count_forced* are identical to those of *macs2_island_count*.

The next step (*macs2_stat*) decides which islands and estimated fragment size appear in the workflow output: those from *macs2_island_count* or those from *macs2_island_count_forced*. If the input trigger of this step is set to True, it means that *macs2_callpeak_forced* was run and returned results different from *macs2_callpeak*, so *macs2_stat* returns [fragments_new, fragments_old, islands_new]; if the trigger is False, it returns [fragments_old, fragments_old, islands_old], where the suffix "old" marks results from the *macs2_island_count* step and the suffix "new" marks results from the *macs2_island_count_forced* step.

The following two steps (*bamtools_stats* and *bam_to_bigwig*) calculate coverage from the input BAM file and save it in BigWig format. For that purpose `bamtools stats` returns the number of mapped reads, which is then used as the scaling factor by `bedtools genomecov` when it computes coverage and writes it in bedGraph format; the bedGraph file is then sorted and converted to BigWig by the `bedGraphToBigWig` tool from the UCSC utilities. The *get_stat* step returns a text file with statistics as [TOTAL, ALIGNED, SUPPRESSED, USED] read counts. The *island_intersect* step assigns genes and regions to the islands obtained from *macs2_callpeak_forced*. The *average_tag_density* step calculates data for the average tag density plot from the BAM file.
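As a rough illustration (not the exact CWL wiring), the forced rerun and the coverage branch could be approximated as follows; the 80 bp value comes from the description above, while the scaling arithmetic and file names are placeholders.

```bash
# Forced rerun with a fixed fragment size (macs2_callpeak_forced).
macs2 callpeak -t aligned.dedup.bam -f BAM -g hs -n sample_forced --nomodel --extsize 80

# Coverage branch (bamtools_stats + bam_to_bigwig): scale by mapped reads, convert to BigWig.
MAPPED=$(bamtools stats -in aligned.dedup.bam | awk '/^Mapped reads/ {print $3}')
SCALE=$(echo "1000000 / $MAPPED" | bc -l)            # reads-per-million scaling factor

bedtools genomecov -ibam aligned.dedup.bam -bg -scale "$SCALE" > coverage.bedGraph
LC_ALL=C sort -k1,1 -k2,2n coverage.bedGraph > coverage.sorted.bedGraph
bedGraphToBigWig coverage.sorted.bedGraph chrom.sizes coverage.bigWig
```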

https://github.com/datirium/workflows.git

Path: workflows/trim-chipseq-se.cwl

Branch/Commit ID: fa4f172486288a1a9d23864f1d6962d85a453e16

workflow graph step-valuefrom2-wf_v1_0.cwl

https://github.com/common-workflow-language/cwl-utils.git

Path: testdata/step-valuefrom2-wf_v1_0.cwl

Branch/Commit ID: 8058c7477097f90205dd7d8481781eb3737ea9c9

workflow graph Variant calling germline paired-end

A workflow for the Broad Institute's best practices GATK4 germline variant calling pipeline.

## __Outputs__

#### Primary output files:
- bqsr2_indels.vcf, filtered and recalibrated indels (IGV browser)
- bqsr2_snps.vcf, filtered and recalibrated SNPs (IGV browser)
- bqsr2_snps.ann.vcf, filtered and recalibrated SNPs with effect annotations

#### Secondary output files:
- sorted_dedup_reads.bam, sorted deduplicated alignments (IGV browser)
- raw_indels.vcf, first-pass indel calls
- raw_snps.vcf, first-pass SNP calls

#### Reports:
- overview.md (input list, alignment metrics, variant counts)
- insert_size_histogram.pdf
- recalibration_plots.pdf
- snpEff_summary.html

## __Inputs__

#### General Info
- Sample short name/Alias: unique name for the sample
- Experimental condition: condition, variable, etc. name (e.g. "control" or "20C 60min")
- Cells: name of cells used for the sample
- Catalog No.: vendor catalog number, if available
- BWA index: BWA index sample that contains the reference genome FASTA with associated indices
- SNPEFF database: name of the SNPEFF database to use for SNP effect annotation
- Read 1 file: first FASTQ file (generally contains "R1" in the filename)
- Read 2 file: paired FASTQ file (generally contains "R2" in the filename)

#### Advanced
- Ploidy: number of copies per chromosome (default should be 2)
- SNP filters: see Step 6 Notes: https://gencore.bio.nyu.edu/variant-calling-pipeline-gatk4/
- Indel filters: see Step 7 Notes: https://gencore.bio.nyu.edu/variant-calling-pipeline-gatk4/

#### SNPEFF notes:
Get snpEff databases using `docker run --rm -ti gatk4-dev /bin/bash`, then running `java -jar $SNPEFF_JAR databases`. Then use the first column as the SNPEFF input (e.g. "hg38").
- hg38, Homo_sapiens (UCSC), http://downloads.sourceforge.net/project/snpeff/databases/v4_3/snpEff_v4_3_hg38.zip
- mm10, Mus_musculus, http://downloads.sourceforge.net/project/snpeff/databases/v4_3/snpEff_v4_3_mm10.zip
- dm6.03, Drosophila_melanogaster, http://downloads.sourceforge.net/project/snpeff/databases/v4_3/snpEff_v4_3_dm6.03.zip
- Rnor_6.0.86, Rattus_norvegicus, http://downloads.sourceforge.net/project/snpeff/databases/v4_3/snpEff_v4_3_Rnor_6.0.86.zip
- R64-1-1.86, Saccharomyces_cerevisiae, http://downloads.sourceforge.net/project/snpeff/databases/v4_3/snpEff_v4_3_R64-1-1.86.zip

### __Data Analysis Steps__
1. Trimming the adapters with TrimGalore. This step is particularly important when the reads are long and the fragments are short, resulting in sequencing adapters at the ends of reads. If the adapter is not removed, the read will not map. TrimGalore can recognize standard adapters, such as Illumina or Nextera/Tn5 adapters.
2. Generate quality control statistics of the trimmed, unmapped sequence data.
3. Run the germline variant calling pipeline: a custom wrapper script implementing Steps 1-17 of the Broad Institute's best practices GATK4 germline variant calling pipeline (https://gencore.bio.nyu.edu/variant-calling-pipeline-gatk4/). A rough command sketch is shown after this list.

### __References__
1. https://gencore.bio.nyu.edu/variant-calling-pipeline-gatk4/
2. https://gatk.broadinstitute.org/hc/en-us/articles/360035535932-Germline-short-variant-discovery-SNPs-Indels-
3. https://software.broadinstitute.org/software/igv/VCF
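The wrapper script itself is not reproduced here. As an assumption-laden sketch only, the calling and filtering portion of the GATK4 best practices corresponds roughly to commands like the following; the reference, file names, and filter expression are placeholders, and the real workflow additionally performs base quality score recalibration (BQSR) and a second calling round.

```bash
# Hypothetical outline of the core GATK4 germline steps (placeholder names and filters).
bwa mem ref.fa trimmed_R1.fastq.gz trimmed_R2.fastq.gz | samtools sort -o sorted.bam -
gatk MarkDuplicates -I sorted.bam -O sorted_dedup_reads.bam -M dup_metrics.txt

gatk HaplotypeCaller -R ref.fa -I sorted_dedup_reads.bam -ploidy 2 -O raw_variants.vcf
gatk SelectVariants -R ref.fa -V raw_variants.vcf --select-type-to-include SNP   -O raw_snps.vcf
gatk SelectVariants -R ref.fa -V raw_variants.vcf --select-type-to-include INDEL -O raw_indels.vcf

# Hard filtering; the expression below is only an example, not the workflow's SNP filter.
gatk VariantFiltration -R ref.fa -V raw_snps.vcf \
  --filter-name "QD2" --filter-expression "QD < 2.0" -O filtered_snps.vcf

# Effect annotation with snpEff; the database name (e.g. hg38) comes from the SNPEFF database input.
java -jar "$SNPEFF_JAR" hg38 filtered_snps.vcf > filtered_snps.ann.vcf
```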

https://github.com/datirium/workflows.git

Path: workflows/vc-germline-pe.cwl

Branch/Commit ID: fa4f172486288a1a9d23864f1d6962d85a453e16

workflow graph workflow_input_format_expr_v1_1.cwl

https://github.com/common-workflow-language/cwl-utils.git

Path: testdata/workflow_input_format_expr_v1_1.cwl

Branch/Commit ID: 8058c7477097f90205dd7d8481781eb3737ea9c9

workflow graph count-lines6-wf.cwl

https://github.com/common-workflow-language/cwl-v1.2.git

Path: tests/count-lines6-wf.cwl

Branch/Commit ID: 7d7986a6e852ca6e3239c96d3a05dd536c76c903

workflow graph Trim Galore RNA-Seq pipeline single-read strand specific

Note: should be updated. The original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **RNA-Seq** basic analysis for a **single-end** experiment. A corresponding input [FASTQ](http://maq.sourceforge.net/fastq.shtml) file has to be provided. The current workflow should be used only with single-end RNA-Seq data. It performs the following steps (a rough command sketch follows the list):

1. Trim adapters from the input FASTQ file.
2. Use STAR to align reads from the input FASTQ file against the predefined reference indices; generate an unsorted BAM file and an alignment statistics file.
3. Use fastx_quality_stats to analyze the input FASTQ file and generate a quality statistics file.
4. Use samtools sort to generate a coordinate-sorted BAM(+BAI) file pair from the unsorted BAM file obtained in step 2 (after running STAR).
5. Generate a BigWig file from the sorted BAM file.
6. Map the input FASTQ file to the predefined rRNA reference indices using Bowtie to estimate the level of rRNA contamination; export the resulting statistics to a file.
7. Calculate isoform expression levels for the sorted BAM file and a GTF/TAB annotation file using the GEEP read-counting utility; export the results to a file.
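As a hedged sketch, the trimming, alignment, QC, and sorting steps correspond approximately to the commands below; the index directory, thread count, and file names are placeholders rather than the workflow's actual parameters.

```bash
# Rough single-end equivalents of steps 1-4 (illustrative parameters only).
trim_galore reads.fastq                                        # 1: adapter trimming

STAR --runThreadN 4 --genomeDir star_index \
     --readFilesIn reads_trimmed.fq \
     --outSAMtype BAM Unsorted                                 # 2: unsorted BAM + Log.final.out stats

fastx_quality_stats -i reads_trimmed.fq -o quality_stats.txt   # 3: FASTQ quality statistics

samtools sort -o aligned.sorted.bam Aligned.out.bam            # 4: coordinate-sorted BAM ...
samtools index aligned.sorted.bam                              # ... plus BAI index
```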

https://github.com/datirium/workflows.git

Path: workflows/trim-rnaseq-se-dutp.cwl

Branch/Commit ID: fa4f172486288a1a9d23864f1d6962d85a453e16

workflow graph Prodigal SWF

SubWorkflow for prodigal. Protein-coding gene prediction for prokaryotic genomes.
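For reference, a typical standalone Prodigal invocation looks roughly like the line below; file names are placeholders and the subworkflow's exact options may differ.

```bash
# Hedged example of a Prodigal run: -o/-f give gene coordinates in GFF,
# -a writes translated proteins, -p meta uses the anonymous/metagenome mode.
prodigal -i contigs.fasta -o genes.gff -f gff -a proteins.faa -p meta
```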

https://github.com/EBI-Metagenomics/emg-viral-pipeline.git

Path: cwl/src/Tools/Prodigal/prodigal_swf.cwl

Branch/Commit ID: b0ed3f07c8faced85609287759596ad83e154977

workflow graph Deprecated. RNA-Seq pipeline paired-end strand specific

The original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **RNA-Seq** basic analysis for a **paired-end** experiment. Corresponding input [FASTQ](http://maq.sourceforge.net/fastq.shtml) files have to be provided. The current workflow should be used only with paired-end RNA-Seq data. It performs the following steps (a rough sketch of the rRNA contamination check follows the list):

1. Use STAR to align reads from the input FASTQ files against the predefined reference indices; generate an unsorted BAM file and an alignment statistics file.
2. Use fastx_quality_stats to analyze the input FASTQ files and generate quality statistics files.
3. Use samtools sort to generate a coordinate-sorted BAM(+BAI) file pair from the unsorted BAM file obtained in step 1 (after running STAR).
4. Generate a BigWig file from the sorted BAM file.
5. Map the input FASTQ files to the predefined rRNA reference indices using Bowtie to estimate the level of rRNA contamination; export the resulting statistics to a file.
6. Calculate isoform expression levels for the sorted BAM file and a GTF/TAB annotation file using the GEEP read-counting utility; export the results to a file.
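The rRNA contamination estimate in step 5 amounts to aligning the read pairs against an rRNA-only Bowtie index and keeping the alignment-rate summary that Bowtie prints to stderr. A hedged sketch, with placeholder index and file names:

```bash
# Illustrative paired-end Bowtie run against an rRNA index; the real parameters live in the CWL step.
bowtie -S -p 4 rrna_index -1 reads_R1.fastq -2 reads_R2.fastq /dev/null 2> rrna_stats.txt
# rrna_stats.txt captures Bowtie's alignment summary, which is used to report the rRNA level.
```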

https://github.com/datirium/workflows.git

Path: workflows/rnaseq-pe-dutp.cwl

Branch/Commit ID: fa4f172486288a1a9d23864f1d6962d85a453e16

workflow graph step_valuefrom5_wf_v1_0.cwl

https://github.com/common-workflow-language/cwl-utils.git

Path: testdata/step_valuefrom5_wf_v1_0.cwl

Branch/Commit ID: 8058c7477097f90205dd7d8481781eb3737ea9c9

workflow graph workflow_input_sf_expr_array.cwl

https://github.com/common-workflow-language/cwl-utils.git

Path: testdata/workflow_input_sf_expr_array.cwl

Branch/Commit ID: 8058c7477097f90205dd7d8481781eb3737ea9c9