Explore Workflows

Browse already parsed workflows below, or add your own.

workflow graph step-valuefrom3-wf_v1_2.cwl

https://github.com/common-workflow-language/cwl-utils.git

Path: testdata/step-valuefrom3-wf_v1_2.cwl

Branch/Commit ID: 5759b4275906e6cfe13912c8426de2a2237cb4b0

workflow graph Trim Galore ATAC-Seq pipeline paired-end

This ATAC pipeline is based on the original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **ChIP-Seq** basic analysis workflow for a **paired-end** experiment with Trim Galore. The pipeline was adapted for ATAC-Seq paired-end data analysis by updating the genome coverage step.

### Data Analysis Steps

SciDAP starts from the .fastq files which most DNA cores and commercial NGS companies return. Starting from raw data allows us to ensure that all experiments have been processed in the same way and simplifies the deposition of data to GEO upon publication. The data can be uploaded from the user's computer, downloaded directly from an FTP server of the core facility by providing a URL, or retrieved from GEO by providing an SRA accession number. Our current pipelines include the following steps:

1. Trimming the adapters with Trim Galore. This step is particularly important when the reads are long and the fragments are short, as in ATAC, resulting in sequencing adapters at the end of reads. If the adapter is not removed, the read will not map. Trim Galore can recognize standard adapters, such as Nextera/Tn5 adapters.
2. QC.
3. (Optional) Trimming adapters on the 5' or 3' end by the specified number of bases.
4. Mapping reads with Bowtie. Only uniquely mapped reads with fewer than 3 mismatches are used in the downstream analysis. Results are saved as a .bam file.
5. Removal of reads mapping to chromosome M. Since there are many copies of chromosome M in the cell and it is not protected by histones, some ATAC libraries have up to 50% of reads mapping to chrM. We recommend using the OMNI-ATAC protocol, which reduces chrM reads and provides better specificity.
6. (Optional) Removal of duplicates (reads/pairs of reads mapping to exactly the same location). This step is used to remove reads overamplified in PCR. Unfortunately, it may also remove "good" reads. We usually do not remove duplicates unless the library is heavily duplicated. Please note that MACS2 will remove 'excessive' duplicates during peak calling in a smart way (those not supported by other nearby reads).
7. Peak calling by MACS2. Optionally, it is possible to specify the read extension length for MACS2 to use if the length determined automatically is wrong.
8. Generation of BigWig coverage files for display in the browser. Since the cuts by the Tn5 transposase are 9 bp apart, we show coverage by 9 bp reads rather than by fragments as in ChIP-Seq. The coverage shows the number of fragments at each base in the genome, normalized to the number of millions of mapped reads. This way the peak of coverage will be located at the most accessible site.

### Details

_Trim Galore_ is a wrapper around [Cutadapt](https://github.com/marcelm/cutadapt) and [FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/) that consistently applies adapter and quality trimming to FastQ files, with extra functionality for RRBS data. A [FASTQ](http://maq.sourceforge.net/fastq.shtml) input file has to be provided. As outputs the workflow returns a coordinate-sorted BAM file along with its BAI index, quality statistics for both input FASTQ files, read coverage in BigWig format, peak calling data as narrowPeak or broadPeak files, islands with the assigned nearest genes and region type, and data for the average tag density plot (based on the BAM file).

The workflow starts by running fastx_quality_stats (steps fastx_quality_stats_upstream and fastx_quality_stats_downstream) from the FASTX-Toolkit to calculate quality statistics for both the upstream and downstream input FASTQ files. At the same time, Bowtie aligns reads from the input FASTQ files to the reference genome (step bowtie_aligner). The output of this step is an unsorted SAM file, which is sorted and indexed by samtools sort and samtools index (step samtools_sort_index). Depending on the workflow's input parameters, the indexed and sorted BAM file can be processed by samtools rmdup (step samtools_rmdup) to remove possible read duplicates. When removing duplicates is not necessary, the step returns the original input BAM and BAI files without any processing. If duplicates were removed, the following step (step samtools_sort_index_after_rmdup) reruns samtools sort and samtools index on the resulting BAM and BAI files; if not, the step returns the original input files unchanged.

Right after that, macs2 callpeak performs peak calling (step macs2_callpeak). Based on the returned outputs, the next step (step macs2_island_count) calculates the number of islands and the estimated fragment size. If the latter is less than 80 (hardcoded in the workflow), macs2 callpeak is rerun with a forced fixed fragment size value (step macs2_callpeak_forced). If peak calling with a fixed fragment size was already forced in the workflow input parameters, this step is skipped and the original peak calling results are kept. The workflow then again calculates the number of islands and the estimated fragment size (step macs2_island_count_forced) for the data obtained from the macs2_callpeak_forced step; if that step was skipped, the results from macs2_island_count_forced are identical to those from macs2_island_count. The next step (step macs2_stat) defines which islands and estimated fragment size should be used in the workflow output: those from macs2_island_count or those from macs2_island_count_forced. If the input trigger of this step is set to True, macs2_callpeak_forced was run and returned results different from macs2_callpeak, so macs2_stat returns [fragments_new, fragments_old, islands_new]; if the trigger is False, the step returns [fragments_old, fragments_old, islands_old], where the suffix "old" denotes results obtained from the macs2_island_count step and the suffix "new" denotes results from the macs2_island_count_forced step.

The following two steps (steps bamtools_stats and bam_to_bigwig) calculate coverage from the input BAM file and save it in BigWig format. For that purpose, bamtools stats returns the number of mapped reads, which is then used as a scaling factor by bedtools genomecov when it performs the coverage calculation and saves it in BED format. The BED file is then sorted and converted to BigWig format by the bedGraphToBigWig tool from the UCSC utilities. To adapt the pipeline for ATAC-Seq data analysis, genome coverage is calculated using only the first 9 bp of every read. The get_stat step returns a text file with statistics in the form of [TOTAL, ALIGNED, SUPRESSED, USED] read counts. The island_intersect step assigns genes and regions to the islands obtained from macs2_callpeak_forced. The average_tag_density step calculates data for the average tag density plot from the BAM file.
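The fragment-size fallback and the macs2_stat selection are the least obvious part of this flow, so here is a minimal Python sketch of that decision logic, assuming only the behaviour described above. The function names, threshold constant, and example values are hypothetical illustrations, not code taken from the workflow, which implements this across the macs2_island_count, macs2_callpeak_forced, and macs2_stat CWL steps.

```python
# Hypothetical sketch of the fragment-size fallback described above; not the
# workflow's actual code.

FRAGMENT_SIZE_THRESHOLD = 80  # hardcoded threshold mentioned in the description


def needs_forced_rerun(estimated_fragment_size: int, force_fixed: bool) -> bool:
    """Rerun MACS2 with a fixed fragment size when the automatic estimate is
    unreliable, unless fixed-size peak calling was forced from the start."""
    return not force_fixed and estimated_fragment_size < FRAGMENT_SIZE_THRESHOLD


def macs2_stat(trigger: bool,
               fragments_old: int, islands_old: int,
               fragments_new: int, islands_new: int) -> list:
    """Select which fragment size and island count the workflow reports.

    trigger=True  -> the forced rerun produced new results:
                     [fragments_new, fragments_old, islands_new]
    trigger=False -> no rerun (or identical results):
                     [fragments_old, fragments_old, islands_old]
    """
    if trigger:
        return [fragments_new, fragments_old, islands_new]
    return [fragments_old, fragments_old, islands_old]


if __name__ == "__main__":
    # Example: an auto-estimated fragment size of 62 bp triggers the forced rerun.
    rerun = needs_forced_rerun(estimated_fragment_size=62, force_fixed=False)
    print(macs2_stat(rerun, fragments_old=62, islands_old=1500,
                     fragments_new=150, islands_new=2100))
```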

https://github.com/datirium/workflows.git

Path: workflows/trim-atacseq-pe.cwl

Branch/Commit ID: c6bfa0de917efb536dd385624fc7702e6748e61d

workflow graph workflow_same_level_v2.cwl#main_pipeline

Simulation steps pipeline

https://github.com/ILIAD-ocean-twin/application_package.git

Path: workflow_in_workflow/workflow_same_level_v2.cwl

Branch/Commit ID: 2f678aa688683e20169abaaec9166b4a32403523

Packed ID: main_pipeline

workflow graph GSEApy - Gene Set Enrichment Analysis in Python

GSEAPY: Gene Set Enrichment Analysis in Python
==============================================

Gene Set Enrichment Analysis is a computational method that determines whether an a priori defined set of genes shows statistically significant, concordant differences between two biological states (e.g. phenotypes).

GSEA requires as input an expression dataset, which contains expression profiles for multiple samples. While the software supports multiple input file formats for these datasets, the tab-delimited GCT format is the most common. The first column of the GCT file contains feature identifiers (gene ids or symbols in the case of data derived from RNA-Seq experiments). The second column contains a description of the feature; this column is ignored by GSEA and may be filled with “NA”s. Subsequent columns contain the expression values for each feature, with one sample's expression value per column. It is important to note that there are no hard and fast rules regarding how a GCT file's expression values are derived. The important point is that they are comparable to one another across features within a sample and comparable to one another across samples. Tools such as DESeq2 can be made to produce properly normalized data (normalized counts) which are compatible with GSEA.

Documents
==============================================
- GSEA Home Page: https://www.gsea-msigdb.org/gsea/index.jsp
- Results Interpretation: https://www.gsea-msigdb.org/gsea/doc/GSEAUserGuideTEXT.htm#_Interpreting_GSEA_Results
- GSEA User Guide: https://gseapy.readthedocs.io/en/latest/faq.html
- GSEAPY Docs: https://gseapy.readthedocs.io/en/latest/introduction.html

References
==============================================
- Subramanian, Tamayo, et al. (2005, PNAS), https://www.pnas.org/content/102/43/15545
- Mootha, Lindgren, et al. (2003, Nature Genetics), http://www.nature.com/ng/journal/v34/n3/abs/ng1180.html
- Chen EY, Tan CM, Kou Y, Duan Q, Wang Z, Meirelles GV, Clark NR, Ma'ayan A. Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool. BMC Bioinformatics. 2013; 128(14).
- Kuleshov MV, Jones MR, Rouillard AD, Fernandez NF, Duan Q, Wang Z, Koplev S, Jenkins SL, Jagodnik KM, Lachmann A, McDermott MG, Monteiro CD, Gundersen GW, Ma'ayan A. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Research. 2016; gkw377.
- Xie Z, Bailey A, Kuleshov MV, Clarke DJB, Evangelista JE, Jenkins SL, Lachmann A, Wojciechowicz ML, Kropiwnicki E, Jagodnik KM, Jeon M, & Ma’ayan A. Gene set knowledge discovery with Enrichr. Current Protocols, 1, e90. 2021. doi: 10.1002/cpz1.90
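As a concrete illustration of the GCT layout described above, here is a minimal Python sketch, assuming pandas is available, that writes normalized counts (e.g. from DESeq2) to a tab-delimited GCT 1.2 file: gene identifiers in the first column, an ignored description column filled with "NA", and one column per sample. The helper name and the demo values are hypothetical and are not part of this workflow.

```python
# Minimal, hypothetical GCT writer for GSEA input; not part of the gseapy.cwl workflow.
import pandas as pd


def write_gct(counts: pd.DataFrame, path: str) -> None:
    """counts: normalized expression values (e.g. DESeq2 normalized counts),
    indexed by gene id/symbol, with one column per sample."""
    # First column: feature identifiers; second column: description ignored by GSEA.
    gct = pd.DataFrame({"NAME": counts.index, "Description": "NA"})
    gct = pd.concat([gct, counts.reset_index(drop=True)], axis=1)
    with open(path, "w") as handle:
        handle.write("#1.2\n")                                   # GCT version line
        handle.write(f"{counts.shape[0]}\t{counts.shape[1]}\n")  # rows, samples
        gct.to_csv(handle, sep="\t", index=False)


if __name__ == "__main__":
    demo = pd.DataFrame(
        {"sample_1": [120.4, 8.0], "sample_2": [98.7, 15.2]},
        index=["TP53", "MYC"],
    )
    write_gct(demo, "expression.gct")
```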

https://github.com/datirium/workflows.git

Path: workflows/gseapy.cwl

Branch/Commit ID: 69643d8c15f5357a320aa7e2f6adb2e71302fd20

workflow graph binning.cwl

https://github.com/EBI-Metagenomics/CWL-binning.git

Path: workflows/binning.cwl

Branch/Commit ID: 895abf7343a1039e5fd7a509af082425009be2dd

workflow graph chksum_xam_to_interleaved_fq.cwl

https://github.com/cancerit/workflow-seq-import.git

Path: cwls/chksum_xam_to_interleaved_fq.cwl

Branch/Commit ID: 2e592d7253c814aae74e12934211d01da21e957e

workflow graph rna amplicon analysis for fasta files

RNAs - qc, preprocess, annotation, index, abundance

https://github.com/MG-RAST/pipeline.git

Path: CWL/Workflows/amplicon-fasta.workflow.cwl

Branch/Commit ID: f906212e2c9a88280ae36545e5422f25752aa8f4

workflow graph conflict-wf.cwl#collision

https://github.com/common-workflow-language/cwltool.git

Path: cwltool/schemas/v1.0/v1.0/conflict-wf.cwl

Branch/Commit ID: bfe56f3138e9e6fc0b9b8c06447553d4cea03d59

Packed ID: collision

workflow graph bam to trimmed fastqs and biscuit alignments

https://github.com/genome/analysis-workflows.git

Path: definitions/subworkflows/bam_to_trimmed_fastq_and_biscuit_alignments.cwl

Branch/Commit ID: a3e26136043c03192c38c335316d2d36e3e67478

workflow graph tt_kmer_top_n.cwl

https://github.com/ncbi/pgap.git

Path: task_types/tt_kmer_top_n.cwl

Branch/Commit ID: c64599f5db2437f9323d41cc3d8d9efb20a2667e