Explore Workflows


workflow graph Trim Galore ATAC-Seq pipeline paired-end

The original [BioWardrobe](https://biowardrobe.com) ([PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465)) basic **ChIP-Seq** analysis workflow for a **paired-end** experiment with Trim Galore, adapted for ATAC-Seq paired-end data analysis by updating the genome coverage step. _Trim Galore_ is a wrapper around [Cutadapt](https://github.com/marcelm/cutadapt) and [FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/) that consistently applies adapter and quality trimming to FastQ files, with extra functionality for RRBS data. [FASTQ](http://maq.sourceforge.net/fastq.shtml) input files have to be provided. As outputs the workflow returns a coordinate-sorted BAM file along with its BAI index, quality statistics for both input FASTQ files, read coverage as a BigWig file, peak-calling data as narrowPeak or broadPeak files, islands with the assigned nearest genes and region type, and data for an average tag density plot (based on the BAM file).

The workflow starts by running fastx_quality_stats (steps fastx_quality_stats_upstream and fastx_quality_stats_downstream) from the FASTX-Toolkit to calculate quality statistics for both upstream and downstream input FASTQ files. In parallel, Bowtie aligns reads from the input FASTQ files to the reference genome (step bowtie_aligner). The output of this step is an unsorted SAM file, which is sorted and indexed by samtools sort and samtools index (step samtools_sort_index). Depending on the workflow's input parameters, the sorted and indexed BAM file can be processed by samtools rmdup (step samtools_rmdup) to remove read duplicates; if duplicate removal is not requested, the step returns the original BAM and BAI files without any processing. If duplicates were removed, the following step (step samtools_sort_index_after_rmdup) reruns samtools sort and samtools index on the resulting BAM file; otherwise it returns the original input files unchanged.

MACS2 callpeak then performs peak calling (step macs2_callpeak). Based on its outputs, the next step (step macs2_island_count) calculates the number of islands and the estimated fragment size. If the estimated fragment size is less than 80 (hardcoded in the workflow), macs2 callpeak is rerun with a forced fixed fragment size (step macs2_callpeak_forced). If the workflow input parameters already force peak calling with a fixed fragment size, this step is skipped and the original peak-calling results are kept. The workflow then recalculates the number of islands and the estimated fragment size (step macs2_island_count_forced) for the data obtained from macs2_callpeak_forced; if that step was skipped, the results are identical to those from macs2_island_count. The macs2_stat step decides which island count and estimated fragment size go into the workflow output: either from macs2_island_count or from macs2_island_count_forced. If the input trigger of this step is True, macs2_callpeak_forced was run and returned results different from macs2_callpeak, so macs2_stat returns [fragments_new, fragments_old, islands_new]; if the trigger is False, the step returns [fragments_old, fragments_old, islands_old], where the suffix "old" refers to results from macs2_island_count and the suffix "new" to results from macs2_island_count_forced.
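The macs2_stat selection logic can be summarized with a short sketch. This is illustrative only, not the workflow's actual CWL expression; the function signature is an assumption, and the variable names simply mirror the "old"/"new" suffixes above.

```python
def macs2_stat(trigger, fragments_old, islands_old, fragments_new, islands_new):
    """Pick which fragment size / island count goes into the workflow output.

    Mirrors the trigger logic described above (illustrative sketch, not the
    workflow's implementation).
    """
    if trigger:
        # macs2_callpeak_forced was run and produced results different
        # from macs2_callpeak
        return [fragments_new, fragments_old, islands_new]
    # forced peak calling was skipped; reuse the original results
    return [fragments_old, fragments_old, islands_old]
```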
The following two steps (bamtools_stats and bam_to_bigwig) calculate coverage from the input BAM file and save it in BigWig format. For that purpose bamtools stats returns the number of mapped reads, which bedtools genomecov uses as a scaling factor when it calculates coverage and saves it in BED format. The BED file is then sorted and converted to BigWig format by the bedGraphToBigWig tool from the UCSC utilities. To adapt the pipeline for ATAC-Seq data analysis, genome coverage is calculated using only the first 9 bp of every read. Step get_stat returns a text file with statistics as [TOTAL, ALIGNED, SUPRESSED, USED] read counts. Step island_intersect assigns genes and regions to the islands obtained from macs2_callpeak_forced. Step average_tag_density calculates data for the average tag density plot based on the BAM file.
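To make the coverage chain concrete, here is a minimal Python sketch of the genomecov / sort / bedGraphToBigWig sequence. The reads-per-million scaling constant and the intermediate file names are assumptions for illustration; the workflow defines its own scaling and parameters (including the ATAC-specific use of only the first 9 bp of each read, which is not shown here).

```python
import subprocess

def bam_to_bigwig(bam, chrom_sizes, out_bigwig, mapped_reads):
    """Illustrative sketch of the coverage steps described above:
    bedtools genomecov -> sort -> bedGraphToBigWig.
    The reads-per-million scaling is an assumption, not the workflow's exact value."""
    scale = 1_000_000 / mapped_reads  # scaling factor derived from bamtools stats
    bedgraph = "coverage.bedGraph"
    with open(bedgraph, "w") as bg:
        subprocess.run(
            ["bedtools", "genomecov", "-ibam", bam, "-bg", "-scale", str(scale)],
            stdout=bg, check=True)
    sorted_bg = "coverage.sorted.bedGraph"
    with open(sorted_bg, "w") as out:
        subprocess.run(["sort", "-k1,1", "-k2,2n", bedgraph], stdout=out, check=True)
    subprocess.run(["bedGraphToBigWig", sorted_bg, chrom_sizes, out_bigwig], check=True)
```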

https://github.com/datirium/workflows.git

Path: workflows/trim-atacseq-pe.cwl

Branch/Commit ID: 2b8146f76595f0c4d8bf692de78b21280162b1d0

workflow graph Xenbase ChIP-Seq pipeline single-read

1. Convert the input SRA file into a FASTQ file (run fastq-dump)
2. Analyze the quality of the FASTQ file (run fastqc)
3. If any of the following fields in the fastqc-generated report is marked as failed: "Per base sequence quality", "Per sequence quality scores", "Overrepresented sequences", "Adapter Content" - trim adapters (run trimmomatic); the decision is sketched after this list
4. Align the original or trimmed FASTQ file to the reference genome (run Bowtie2)
5. Sort and index the BAM file generated by Bowtie2 (run samtools sort, samtools index)
6. Remove duplicates from the sorted BAM file (run picard)
7. Sort and index the BAM file after duplicate removal (run samtools sort, samtools index)
8. Count the number of mapped reads in the sorted BAM file (run bamtools stats)
9. Generate a genome coverage BED file (run bedtools genomecov)
10. Sort the generated BED file (run sort)
11. Generate a genome coverage bigWig file from the BED file (run bedGraphToBigWig)
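A minimal sketch of the adapter-trimming decision in step 3, assuming the FastQC summary has already been parsed into a mapping of section name to status; the function name and data shape are illustrative, not part of the workflow.

```python
# FastQC sections whose FAIL status triggers adapter trimming (from the list above)
TRIM_TRIGGER_SECTIONS = {
    "Per base sequence quality",
    "Per sequence quality scores",
    "Overrepresented sequences",
    "Adapter Content",
}

def needs_trimming(fastqc_summary):
    """Return True if any watched FastQC section is marked FAIL.

    `fastqc_summary` is assumed to be a dict mapping section names to
    "PASS"/"WARN"/"FAIL", e.g. parsed from FastQC's summary.txt.
    """
    return any(fastqc_summary.get(section) == "FAIL"
               for section in TRIM_TRIGGER_SECTIONS)
```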

https://github.com/datirium/workflows.git

Path: workflows/xenbase-chipseq-se.cwl

Branch/Commit ID: 2b8146f76595f0c4d8bf692de78b21280162b1d0

workflow graph wgs alignment and tumor-only variant detection

https://github.com/genome/analysis-workflows.git

Path: definitions/pipelines/wgs.cwl

Branch/Commit ID: 6f9f8a2057c6a9f221a44559f671e87a75c70075

workflow graph gather AML trio outputs

https://github.com/genome/analysis-workflows.git

Path: definitions/pipelines/aml_trio_cle_gathered.cwl

Branch/Commit ID: 42c66dd24ce5026d3f717214ddb18b7b4fae93cf

workflow graph kmer_top_n

https://github.com/ncbi/pgap.git

Path: task_types/tt_kmer_top_n.cwl

Branch/Commit ID: 6d8d29a2156b93a75f1d1c6952738bd63f6bd98e

workflow graph WGS QC workflow mouse

https://github.com/genome/analysis-workflows.git

Path: definitions/subworkflows/qc_wgs_mouse.cwl

Branch/Commit ID: b7d9ace34664d3cedb16f2512c8a6dc6debfc8ca

workflow graph count-lines14-wf.cwl

https://github.com/common-workflow-language/cwl-v1.1.git

Path: tests/count-lines14-wf.cwl

Branch/Commit ID: 0e37d46e793e72b7c16b5ec03e22cb3ce1f55ba3

workflow graph tt_univec_wnode.cwl

https://github.com/ncbi/pgap.git

Path: task_types/tt_univec_wnode.cwl

Branch/Commit ID: 4533a5e930305c674057bc4cf5dda4f39d39b5df

workflow graph FASTQ to BQSR

https://github.com/genome/analysis-workflows.git

Path: definitions/subworkflows/fastq_to_bqsr.cwl

Branch/Commit ID: dc2c019c1aa24cc01b451a0f048cf94a35f163c4

workflow graph CUT&RUN/TAG Paired-end Workflow

A basic analysis workflow for paired-read CUT&RUN and CUT&TAG sequencing experiments. These library prep methods are ultra-sensitive chromatin mapping technologies compared to the ChIP-Seq methodology. Their primary benefits include 1) length filtering, 2) a higher signal-to-noise ratio, and 3) built-in normalization for between-sample comparisons. This workflow utilizes the tool [SEACR (Sparse Enrichment Analysis of CUT&RUN data)](https://github.com/FredHutch/SEACR), which calls enriched regions in the target sequence data by identifying the top 1% of regions by area under the curve of the alignment pileup. This workflow is loosely based on the [CUT-RUNTools-2.0 pipeline](https://github.com/fl-yu/CUT-RUNTools-2.0), and the ChIP-Seq pipeline from [BioWardrobe](https://biowardrobe.com) ([PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465)) was used as a CWL template.

### __Inputs__

*General Info (required\*):*
- Experiment short name/Alias* - a unique name for the sample (e.g. what was used on tubes while processing it)
- Cells* - sample cell type or organism name
- Conditions* - experimental condition name
- Catalog # - catalog number for cells from the vendor/supplier
- Primary [genome index](https://scidap.com/tutorials/basic/genome-indices) for peak calling* - preprocessed genome index of the sample organism for primary alignment and peak calling
- Secondary [genome index](https://scidap.com/tutorials/basic/genome-indices) for spike-in normalization* - preprocessed genome index of the spike-in organism for secondary alignment (of reads left unaligned by the primary alignment) and spike-in normalization; the default should be E. coli K-12
- FASTQ file for R1* - read 1 file of a paired-end library
- FASTQ file for R2* - read 2 file of a paired-end library

*Advanced:*
- Number of bases to clip from the 3p end - used by the bowtie aligner to trim <int> bases from the 3' (right) end of reads
- Number of bases to clip from the 5p end - used by the bowtie aligner to trim <int> bases from the 5' (left) end of reads
- Call samtools rmdup to remove duplicates from sorted BAM file? - toggle on/off to remove duplicate reads from the analysis
- Fragment Length Filter - retains fragments within the selected base pair (bp) range for peak analysis (drop-down menu):
  - `Default_Range` retains fragments <1000 bp
  - `Histone_Binding_Library` retains fragments between 130-300 bp
  - `Transcription_Factor_Binding_Library` retains fragments <130 bp
- Max distance (bp) from gene TSS (in both directions) overlapping which the peak will be assigned to the promoter region - default set to `1000`
- Max distance (bp) from the promoter (only in the upstream direction) overlapping which the peak will be assigned to the upstream region - default set to `20000`
- Number of threads for steps that support multithreading - default set to `2`

### __Outputs__

Intermediate and final downloadable outputs include:
- IGV with gene, BigWig (raw and normalized), and stringent peak tracks
- quality statistics and visualizations for both R1/R2 input FASTQ files
- coordinate-sorted BAM file with associated BAI file for the primary alignment
- read pileup/coverage in BigWig format (raw and normalized)
- cleaned BED files (containing fragment coordinates), and spike-in normalized SEACR peak-called BED files from both "stringent" and "relaxed" modes
- stringent peak call BED file with nearest gene annotations per peak

### __Data Analysis Steps__

1. Trimming the adapters with TrimGalore.
   - This step is particularly important when the reads are long and the fragments are short, resulting in sequencing adapters at the ends of reads. If the adapter is not removed, the read will not map. TrimGalore can recognize standard adapters, such as Illumina or Nextera/Tn5 adapters.
2. Generate quality control statistics of the trimmed, unmapped sequence data.
3. (Optional) Clipping of the 5' and/or 3' end by the specified number of bases.
4. Mapping reads to the primary genome index with Bowtie.
   - Only uniquely mapped reads with fewer than 3 mismatches are used in the downstream analysis. Results are then sorted and indexed. Final outputs are in BAM/BAI format, which are also used to extrapolate the effects of additional sequencing based on library complexity.
5. (Optional) Removal of duplicates (reads/pairs of reads mapping to exactly the same location).
   - This step is used to remove reads overamplified during library amplification. Unfortunately, it may also remove "good" reads. We usually do not remove duplicates unless the library is heavily duplicated.
6. Mapping unaligned reads from the primary alignment to the secondary genome index with Bowtie.
   - This step is used to obtain the number of reads for normalization, used to scale the pileups from the primary alignment (the scaling arithmetic is sketched after the references below). After normalization, sample pileups/peaks may then be appropriately compared to one another, assuming equal use of spike-in material during library preparation. Note that the default genome index for this step should be *E. coli* K-12 if no spike-in material was called out in the library protocol. Refer to [Step 16](https://www.protocols.io/view/cut-amp-tag-data-processing-and-analysis-tutorial-e6nvw93x7gmk/v1?step=16#step-4A3D8C70DC3011EABA5FF3676F0827C5) of the "CUT&Tag Data Processing and Analysis Tutorial" by Zheng Y et al. (2020), Protocols.io.
7. Formatting the alignment file to account for fragments based on the paired-end BAM.
   - Generates a filtered and normalized BED file to be used as input for SEACR peak calling.
8. Call enriched regions using SEACR.
   - This step uses both stringent and relaxed peak calling modes with an FDR (false discovery rate) of 0.01 and no normalization to a control sample. The output of SEACR is the [called peaks BED format file](https://github.com/FredHutch/SEACR#description-of-output-fields).
9. Generation and formatting of output files.
   - This step collects read, alignment, and peak statistics, and generates BigWig coverage/pileup files for display in the IGV browser. The coverage shows the number of fragments that cover each base in the genome, both normalized and unnormalized by the calculated spike-in scaling factor.

### __References__

- Meers MP, Tenenbaum D, Henikoff S. (2019). Peak calling by Sparse Enrichment Analysis for CUT&RUN chromatin profiling. Epigenetics & Chromatin 12(1):42.
- Langmead B, Trapnell C, Pop M, Salzberg SL. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol 10:R25.
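The sketch below illustrates two ideas from the description above: the fragment-length filter options and a spike-in scaling factor computed from the secondary alignment's mapped-read count. The boundary handling, the function names, and the scaling constant (10,000, a common convention in CUT&Tag tutorials) are assumptions, not the workflow's actual implementation.

```python
# Fragment-length ranges mirror the drop-down options described above;
# exact boundary handling here is illustrative only.
FRAGMENT_FILTERS = {
    "Default_Range": (0, 1000),                        # retain fragments <1000 bp
    "Histone_Binding_Library": (130, 300),             # retain fragments 130-300 bp
    "Transcription_Factor_Binding_Library": (0, 130),  # retain fragments <130 bp
}

def keep_fragment(length_bp, mode="Default_Range"):
    """Return True if a fragment of `length_bp` passes the selected filter."""
    low, high = FRAGMENT_FILTERS[mode]
    return low <= length_bp <= high

def spikein_scale(spikein_mapped_reads, constant=10_000):
    """Spike-in scaling factor: constant / reads mapped to the spike-in genome.
    The constant is an assumption, not a value taken from this workflow."""
    return constant / spikein_mapped_reads
```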

https://github.com/datirium/workflows.git

Path: workflows/cutandrun-pe.cwl

Branch/Commit ID: bf80c9339d81a78aefb8de661bff998ed86e836e