Explore Workflows

workflow graph DESeq - differential gene expression analysis

Differential gene expression analysis
=====================================

Differential gene expression analysis based on the negative binomial distribution. Estimate variance-mean dependence in count data from high-throughput sequencing assays and test for differential expression based on a model using the negative binomial distribution.

DESeq1
------

High-throughput sequencing assays such as RNA-Seq, ChIP-Seq or barcode counting provide quantitative readouts in the form of count data. To infer differential signal in such data correctly and with good statistical power, estimation of data variability throughout the dynamic range and a suitable error model are required. Simon Anders and Wolfgang Huber propose a method based on the negative binomial distribution, with variance and mean linked by local regression, and present an implementation, [DESeq](http://bioconductor.org/packages/release/bioc/html/DESeq.html), as an R/Bioconductor package.

DESeq2
------

In comparative high-throughput sequencing assays, a fundamental task is the analysis of count data, such as read counts per gene in RNA-seq, for evidence of systematic changes across experimental conditions. Small replicate numbers, discreteness, large dynamic range and the presence of outliers require a suitable statistical approach. [DESeq2](http://www.bioconductor.org/packages/release/bioc/html/DESeq2.html) is a method for differential analysis of count data that uses shrinkage estimation for dispersions and fold changes to improve the stability and interpretability of estimates. This enables a more quantitative analysis focused on the strength rather than the mere presence of differential expression.
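As a point of reference, DESeq2 models read counts with a negative binomial generalized linear model; the compact form below follows the notation of the DESeq2 publication and is a summary added here, not part of the parsed workflow description:

$$
K_{ij} \sim \mathrm{NB}(\mu_{ij},\, \alpha_i), \qquad
\mu_{ij} = s_j\, q_{ij}, \qquad
\log_2 q_{ij} = \sum_r x_{jr}\, \beta_{ir},
$$

where $K_{ij}$ is the count for gene $i$ in sample $j$, $s_j$ a sample-specific size factor, $\alpha_i$ the gene-wise dispersion, and $\beta_{ir}$ the log2 fold changes for the design variables $x_{jr}$. The "shrinkage estimation for dispersions and fold changes" mentioned above refers to moderating the $\alpha_i$ and $\beta_{ir}$ estimates.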

https://github.com/datirium/workflows.git

Path: workflows/deseq.cwl

Branch/Commit ID: 4360fb2e778ecee42e5f78f83b78c65ab3a2b1df

workflow graph conflict.cwl#main

https://github.com/common-workflow-language/cwltool.git

Path: tests/wf/conflict.cwl

Branch/Commit ID: aec33fcfa3459a90cbba8c88ebb991be94d21429

Packed ID: main

workflow graph samtools_view_sam2bam

https://gitlab.bsc.es/lrodrig1/structuralvariants_poc.git

Path: structuralvariants/cwl/subworkflows/samtools_view_sam2bam.cwl

Branch/Commit ID: a4a3547b9790e99a58424a0dfcb4e467a7691d6a

workflow graph cnv_manta

CNV Manta calling

https://gitlab.bsc.es/lrodrig1/structuralvariants_poc.git

Path: structuralvariants/cwl/subworkflows/cnv_manta.cwl

Branch/Commit ID: a4a3547b9790e99a58424a0dfcb4e467a7691d6a

workflow graph Trim Galore ChIP-Seq pipeline paired-end

The original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **ChIP-Seq** basic analysis workflow for a **paired-end** experiment with Trim Galore. _Trim Galore_ is a wrapper around [Cutadapt](https://github.com/marcelm/cutadapt) and [FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/) that consistently applies adapter and quality trimming to FastQ files, with extra functionality for RRBS data. Paired-end [FASTQ](http://maq.sourceforge.net/fastq.shtml) input files have to be provided. As outputs it returns a coordinate-sorted BAM file along with its BAI index, quality statistics for both input FASTQ files, read coverage as a BigWig file, peak calling data as narrowPeak or broadPeak files, islands with the assigned nearest genes and region type, and data for an average tag density plot (based on the BAM file).

The workflow starts by running fastx_quality_stats (steps fastx_quality_stats_upstream and fastx_quality_stats_downstream) from the FASTX-Toolkit to calculate quality statistics for the upstream and downstream input FASTQ files. At the same time, Bowtie aligns reads from the input FASTQ files to the reference genome (step bowtie_aligner). The output of this step is an unsorted SAM file, which is sorted and indexed by samtools sort and samtools index (step samtools_sort_index). Depending on the workflow's input parameters, the sorted and indexed BAM file can be processed by samtools rmdup (step samtools_rmdup) to remove read duplicates. If removing duplicates is not necessary, the step returns the original input BAM and BAI files without any processing. If duplicates were removed, the following step (step samtools_sort_index_after_rmdup) reruns samtools sort and samtools index on the deduplicated BAM file; otherwise the step returns the original, unchanged input files.

After that, macs2 callpeak performs peak calling (step macs2_callpeak). Based on the returned outputs, the next step (step macs2_island_count) calculates the number of islands and the estimated fragment size. If the latter is less than 80 (hardcoded in the workflow), macs2 callpeak is rerun with a forced, fixed fragment size (step macs2_callpeak_forced). If the workflow input parameters were set from the start to force peak calling with a fixed fragment size, this step is skipped and the original peak calling results are kept. The workflow then recalculates the number of islands and the estimated fragment size (step macs2_island_count_forced) for the data obtained from the macs2_callpeak_forced step; if that step was skipped, the results from macs2_island_count_forced are equal to those from macs2_island_count. The next step (step macs2_stat) defines which island count and estimated fragment size are used in the workflow output: those from macs2_island_count or those from macs2_island_count_forced. If the step's input trigger is set to True, macs2_callpeak_forced was run and returned results different from macs2_callpeak, so macs2_stat returns [fragments_new, fragments_old, islands_new]; if the trigger is False, the step returns [fragments_old, fragments_old, islands_old], where the suffix "old" refers to results from the macs2_island_count step and the suffix "new" to results from the macs2_island_count_forced step.

The following two steps (steps bamtools_stats and bam_to_bigwig) calculate coverage from the input BAM file and save it in BigWig format. For that purpose, bamtools stats returns the number of mapped reads, which is then used as a scaling factor by bedtools genomecov when it performs the coverage calculation and saves it in BED format. The result is then sorted and converted to BigWig format by the bedGraphToBigWig tool from the UCSC utilities. Step get_stat returns a text file with statistics as [TOTAL, ALIGNED, SUPRESSED, USED] read counts. Step island_intersect assigns genes and regions to the islands obtained from macs2_callpeak_forced. Step average_tag_density calculates data for the average tag density plot based on the BAM file.
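The trigger-based selection performed by the macs2_stat step can be summarized with a small sketch. This is only an illustration of the rule described above, not code from the workflow; the function and argument names are hypothetical.

```python
# Illustrative sketch of the macs2_stat selection rule described above.
# Function and argument names are hypothetical, not taken from the CWL sources.
def macs2_stat(trigger, fragments_old, islands_old, fragments_new, islands_new):
    """Choose which fragment size / island count the workflow reports.

    trigger=True  -> macs2_callpeak_forced ran and produced new results.
    trigger=False -> the forced rerun was skipped; keep the original values.
    """
    if trigger:
        return [fragments_new, fragments_old, islands_new]
    return [fragments_old, fragments_old, islands_old]

# Example: estimated fragment size 65 (< 80), so the forced rerun was used.
print(macs2_stat(True, fragments_old=65, islands_old=1200,
                 fragments_new=150, islands_new=980))
# -> [150, 65, 980]
```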

https://github.com/datirium/workflows.git

Path: workflows/trim-chipseq-pe.cwl

Branch/Commit ID: 9bf0aa495735f8081bb5870cb32fc898b9e6eb22

workflow graph Indices builder from GBOL RDF (TTL)

Workflow to build different indices for different tools from a genome and transcriptome. This workflow expects an (annotated) genome in GBOL ttl format.

Steps:
- SAPP: rdf2gtf (genome fasta)
- SAPP: rdf2fasta (transcripts fasta)
- STAR index (Optional for Eukaryotic origin)
- bowtie2 index
- kallisto index
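The three index-building steps at the end of the list map onto standard command-line invocations. The sketch below is an illustration under stated assumptions, not the workflow itself: file names and index names are placeholders, and the genome/transcripts FASTA files are assumed to have been produced by the SAPP rdf2gtf and rdf2fasta steps.

```python
# Illustrative sketch of the index-building steps; not extracted from the CWL.
# Paths and index names are placeholders.
import os
import subprocess

genome_fa = "genome.fasta"            # assumed output of SAPP rdf2gtf
transcripts_fa = "transcripts.fasta"  # assumed output of SAPP rdf2fasta

# bowtie2 index over the genome
subprocess.run(["bowtie2-build", genome_fa, "genome_index"], check=True)

# kallisto index over the transcriptome
subprocess.run(["kallisto", "index", "-i", "transcripts.idx", transcripts_fa],
               check=True)

# STAR genome index (the optional step for genomes of eukaryotic origin)
os.makedirs("star_index", exist_ok=True)
subprocess.run(["STAR", "--runMode", "genomeGenerate",
                "--genomeDir", "star_index",
                "--genomeFastaFiles", genome_fa], check=True)
```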

https://git.wur.nl/unlock/cwl.git

Path: cwl/workflows/workflow_indexbuilder.cwl

Branch/Commit ID: d944d61ddc34a5b24ebac6e1701efd6f8fdf54ae

workflow graph cnv_exomedepth

CNV ExomeDepth calling

https://gitlab.bsc.es/lrodrig1/structuralvariants_poc.git

Path: structuralvariants/cwl/subworkflows/cnv_exome_depth.cwl

Branch/Commit ID: f248ac3ccbf6840af721251c7e9451abd9b2c09f

workflow graph cnv_codex

CNV CODEX calling

https://gitlab.bsc.es/lrodrig1/structuralvariants_poc.git

Path: structuralvariants/cwl/subworkflows/cnv_codex.cwl

Branch/Commit ID: d2314468d2d2ec177d278899820de1cbfe8c8fb6

workflow graph Trim Galore ATAC-Seq pipeline single-read

The original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **ChIP-Seq** basic analysis workflow for a **single-read** experiment with Trim Galore. The pipeline was adapted for ATAC-Seq single-read data analysis by updating the genome coverage step. _Trim Galore_ is a wrapper around [Cutadapt](https://github.com/marcelm/cutadapt) and [FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/) that consistently applies adapter and quality trimming to FastQ files, with extra functionality for RRBS data. As outputs it returns a coordinate-sorted BAM file along with its BAI index, quality statistics of the input FASTQ file, read coverage as a BigWig file, peak calling data as narrowPeak or broadPeak files, islands with the assigned nearest genes and region type, and data for an average tag density plot (based on the BAM file).

The workflow starts with the *fastx\_quality\_stats* step from the FASTX-Toolkit to calculate quality statistics for the input FASTQ file. At the same time, `bowtie` is used to align reads from the input FASTQ file to the reference genome (*bowtie\_aligner*). The output of this step is an unsorted SAM file, which is sorted and indexed by `samtools sort` and `samtools index` (*samtools\_sort\_index*). Depending on the workflow's input parameters, the sorted and indexed BAM file can be processed by `samtools rmdup` (*samtools\_rmdup*) to get rid of duplicated reads. If removing duplicates is not required, the original input BAM and BAI files are returned. Otherwise, step *samtools\_sort\_index\_after\_rmdup* repeats `samtools sort` and `samtools index` on the deduplicated BAM file.

After that, `macs2 callpeak` performs peak calling (*macs2\_callpeak*). Based on the returned outputs, the next step (*macs2\_island\_count*) calculates the number of islands and the estimated fragment size. If the latter is less than 80 bp (hardcoded in the workflow), `macs2 callpeak` is rerun with a forced, fixed fragment size (*macs2\_callpeak\_forced*). If the workflow input parameters were set from the start to force peak calling with a fixed fragment size, this step is skipped and the original peak calling results are kept. The workflow then recalculates the number of islands and the estimated fragment size (*macs2\_island\_count\_forced*) for the data obtained from the *macs2\_callpeak\_forced* step; if that step was skipped, the results from *macs2\_island\_count\_forced* are equal to those from *macs2\_island\_count*. The next step (*macs2\_stat*) defines which island count and estimated fragment size are used in the workflow output: those from *macs2\_island\_count* or those from *macs2\_island\_count\_forced*. If the step's input trigger is set to True, *macs2\_callpeak\_forced* was run and returned results different from *macs2\_callpeak*, so *macs2\_stat* returns [fragments\_new, fragments\_old, islands\_new]; if the trigger is False, the step returns [fragments\_old, fragments\_old, islands\_old], where the suffix "old" refers to results from the *macs2\_island\_count* step and the suffix "new" to results from the *macs2\_island\_count\_forced* step.

The following two steps (*bamtools\_stats* and *bam\_to\_bigwig*) calculate coverage from the input BAM file and save it in BigWig format. For that purpose, bamtools stats returns the number of mapped reads, which is then used as a scaling factor by bedtools genomecov when it performs the coverage calculation and saves it in BED format. The result is then sorted and converted to BigWig format by the bedGraphToBigWig tool from the UCSC utilities. To adapt the pipeline for ATAC-Seq data analysis, genome coverage is calculated using only the first 9 bp of every read. Step *get\_stat* returns a text file with statistics as [TOTAL, ALIGNED, SUPRESSED, USED] read counts. Step *island\_intersect* assigns genes and regions to the islands obtained from *macs2\_callpeak\_forced*. Step *average\_tag\_density* calculates data for the average tag density plot based on the BAM file.
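The ATAC-Seq adaptation mentioned above (counting only the first 9 bp of each read toward coverage) amounts to strand-aware trimming of each read interval to its 5' end. The helper below is a hypothetical illustration of that idea, not code taken from the workflow.

```python
# Hypothetical illustration of "coverage from the first 9 bp of every read".
# Coordinates are 0-based, half-open; the real workflow performs this inside
# its genome-coverage step, not with this helper.
def first_n_bp(start, end, strand, n=9):
    """Return the interval covering the first n bases of a read,
    measured from its 5' end (strand-aware)."""
    length = min(n, end - start)
    if strand == "+":
        return (start, start + length)
    return (end - length, end)

# A 50 bp read on the minus strand at [1000, 1050) contributes [1041, 1050).
print(first_n_bp(1000, 1050, "-"))
```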

https://github.com/datirium/workflows.git

Path: workflows/trim-atacseq-se.cwl

Branch/Commit ID: 9bf0aa495735f8081bb5870cb32fc898b9e6eb22

workflow graph Subsample BAM file creating a tagAlign and pseudoreplicates

This workflow subsamples a BAM file, creating a tagAlign file and pseudoreplicates.
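As a rough illustration of the pseudoreplicate idea (splitting one sample's reads into two halves at random), here is a hedged Python sketch. It assumes a tagAlign file already exists on disk and follows the common shuffle-and-split convention; the file names and the exact splitting strategy used by this workflow are assumptions.

```python
# Hedged sketch: split a tagAlign file into two pseudoreplicates by shuffling
# the records and cutting the list in half. File names are placeholders.
import random

def make_pseudoreplicates(records, seed=1):
    """Shuffle read records and split them into two halves."""
    records = list(records)
    random.Random(seed).shuffle(records)
    half = len(records) // 2
    return records[:half], records[half:]

with open("sample.tagAlign") as fh:           # placeholder input
    pr1, pr2 = make_pseudoreplicates(fh)

for name, lines in (("sample.pr1.tagAlign", pr1), ("sample.pr2.tagAlign", pr2)):
    with open(name, "w") as out:
        out.writelines(lines)
```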

https://github.com/ncbi/cwl-ngs-workflows-cbb.git

Path: workflows/File-formats/subample-pseudoreplicates.cwl

Branch/Commit ID: 433720e6ba8c2d85b15de3ffb9ce1236f08978a4