Explore Workflows

View already parsed workflows, or add your own.

workflow graph bgzip and index VCF

https://github.com/genome/analysis-workflows.git

Path: definitions/subworkflows/bgzip_and_index.cwl

Branch/Commit ID: 0a9a4ce83b49ed4e7eee5bcc09d83725136a36b0

workflow graph iwdr_with_nested_dirs.cwl

https://github.com/common-workflow-language/cwltool.git

Path: cwltool/schemas/v1.0/v1.0/iwdr_with_nested_dirs.cwl

Branch/Commit ID: 203797516329f7fb8aa5e763e6f9b331c63c3060

workflow graph kmer_ref_compare_wnode

https://github.com/ncbi/pgap.git

Path: task_types/tt_kmer_ref_compare_wnode.cwl

Branch/Commit ID: e3f18c61d1bbf65e40921dbd044369da4523ee3e

workflow graph chipseq-pe.cwl

Runs ChIP-Seq BioWardrobe basic analysis with paired-end input data files.

https://github.com/Barski-lab/workflows.git

Path: workflows/chipseq-pe.cwl

Branch/Commit ID: ea2a2ab57710fcf067f67305f3dd6ad29094da1a

workflow graph heatmap-prepare.cwl

The workflow runs the homer-make-tag-directory.cwl tool, scattering over the following inputs: `bam_file`, `fragment_size`, and `total_reads`. `dotproduct` is used as the `scatterMethod`, so one element is taken from each array to construct each job: 1) bam_file[0], fragment_size[0], total_reads[0]; 2) bam_file[1], fragment_size[1], total_reads[1]; ... N) bam_file[N], fragment_size[N], total_reads[N]. The `bam_file`, `fragment_size`, and `total_reads` arrays must therefore be in identical order. A minimal sketch of this scatter pattern is shown after this entry.

https://github.com/datirium/workflows.git

Path: tools/heatmap-prepare.cwl

Branch/Commit ID: 2c486543c335bb99b245dfe7e2f033f535efb9cf
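
The scatter pattern described above can be sketched in CWL roughly as follows. This is an illustrative fragment only, not the actual contents of heatmap-prepare.cwl; the tool path and the `output_tag_folder` output name are assumptions.

```yaml
cwlVersion: v1.0
class: Workflow

requirements:
  ScatterFeatureRequirement: {}

inputs:
  bam_file: File[]
  fragment_size: int[]
  total_reads: int[]

outputs:
  tag_folder:
    type: Directory[]
    outputSource: make_tag_directory/output_tag_folder

steps:
  make_tag_directory:
    run: homer-make-tag-directory.cwl   # tool location assumed for this sketch
    # dotproduct: job i is built from bam_file[i], fragment_size[i], total_reads[i]
    scatter: [bam_file, fragment_size, total_reads]
    scatterMethod: dotproduct
    in:
      bam_file: bam_file
      fragment_size: fragment_size
      total_reads: total_reads
    out: [output_tag_folder]
```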

workflow graph kmer_cache_store

https://github.com/ncbi/pgap.git

Path: task_types/tt_kmer_cache_store.cwl

Branch/Commit ID: 72c3091012f5c2dce38ad9213cda617d2c7a61ac

workflow graph xenbase-chipseq-pe.cwl

XenBase workflow for analysing ChIP-Seq paired-end data

https://github.com/datirium/workflows.git

Path: workflows/xenbase-chipseq-pe.cwl

Branch/Commit ID: 4b8bb1a1ec39056253ca8eee976078e51f4a954e

workflow graph process VCF workflow

https://github.com/genome/analysis-workflows.git

Path: definitions/subworkflows/strelka_process_vcf.cwl

Branch/Commit ID: 51724b44c96e5fd849ae55b752865b80bc47d66c

workflow graph scatter-valuefrom-wf3.cwl#main

https://github.com/common-workflow-language/cwltool.git

Path: cwltool/schemas/v1.0/v1.0/scatter-valuefrom-wf3.cwl

Branch/Commit ID: 203797516329f7fb8aa5e763e6f9b331c63c3060

Packed ID: main

workflow graph ChIP-Seq pipeline paired-end

The original [BioWardrobe](https://biowardrobe.com) ([PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465)) **ChIP-Seq** basic analysis workflow for a **paired-end** experiment. A [FASTQ](http://maq.sourceforge.net/fastq.shtml) input file has to be provided. The pipeline produces a sorted BAM file along with its BAI index, quality statistics for the input FASTQ file, coverage by estimated fragments as a BigWig file, peak calls as narrowPeak or broadPeak files, islands with assigned nearest genes and region types, and data for an average tag density plot.

The workflow starts with the *fastx\_quality\_stats* step from the FASTX-Toolkit, which calculates quality statistics for the input FASTQ file. In parallel, `bowtie` aligns reads from the input FASTQ file to the reference genome (*bowtie\_aligner*). The output of this step is an unsorted SAM file, which is sorted and indexed by `samtools sort` and `samtools index` (*samtools\_sort\_index*). Depending on the workflow's input parameters, the indexed and sorted BAM file can be processed by `samtools rmdup` (*samtools\_rmdup*) to remove duplicated reads. If duplicate removal is not required, the original BAM and BAI files are returned; otherwise the *samtools\_sort\_index\_after\_rmdup* step repeats `samtools sort` and `samtools index` on the deduplicated BAM file.

Next, `macs2 callpeak` performs peak calling (*macs2\_callpeak*), and the *macs2\_island\_count* step reports the number of islands and the estimated fragment size. If the latter is less than 80 bp (hardcoded in the workflow), `macs2 callpeak` is rerun with a forced fixed fragment size (*macs2\_callpeak\_forced*). It is also possible to force MACS2 to use a preset fragment size in the first place. The *macs2\_stat* step then decides which islands and estimated fragment size are used in the workflow output: either those from the *macs2\_island\_count* step or those from the *macs2\_island\_count\_forced* step. If the input trigger of this step is set to True, the *macs2\_callpeak\_forced* step was run and returned results different from the *macs2\_callpeak* step, so *macs2\_stat* returns [fragments\_new, fragments\_old, islands\_new]; if the trigger is False, it returns [fragments\_old, fragments\_old, islands\_old], where the suffix "old" marks results obtained from the *macs2\_island\_count* step and the suffix "new" marks results from the *macs2\_island\_count\_forced* step. A sketch of this selection logic is given after this entry.

The following two steps, *bamtools\_stats* and *bam\_to\_bigwig*, calculate coverage from the BAM file and save it in BigWig format. `bamtools stats` returns the number of mapped reads, which `bedtools genomecov` then uses as a scaling factor when it calculates coverage and saves it as a BEDgraph file; the BEDgraph file is then sorted and converted to BigWig format by the `bedGraphToBigWig` tool from the UCSC utilities.

The *get\_stat* step returns a text file with statistics as read counts in the form [TOTAL, ALIGNED, SUPRESSED, USED]. The *island\_intersect* step assigns nearest genes and regions to the islands obtained from *macs2\_callpeak\_forced*. The *average\_tag\_density* step calculates data for the average tag density plot from the BAM file.

https://github.com/datirium/workflows.git

Path: workflows/chipseq-pe.cwl

Branch/Commit ID: a68821bf3a9ceadc3b2ffbb535d601d9a645b377
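
The *macs2\_stat* selection logic described above can be sketched as a standalone CWL ExpressionTool. This is only an illustration of the described behaviour, not the actual step from the repository; the input and output names are assumptions.

```yaml
cwlVersion: v1.0
class: ExpressionTool
requirements:
  InlineJavascriptRequirement: {}

inputs:
  trigger: boolean        # true when macs2_callpeak_forced returned new results
  fragments_old: int      # fragment size estimated by macs2_island_count
  islands_old: int        # island count from macs2_island_count
  fragments_new: int      # fragment size from macs2_island_count_forced
  islands_new: int        # island count from macs2_island_count_forced

outputs:
  fragments: int          # fragment size to report
  fragments_expected: int # originally estimated fragment size
  islands: int            # island count to report

expression: |
  ${
    if (inputs.trigger) {
      return {"fragments": inputs.fragments_new,
              "fragments_expected": inputs.fragments_old,
              "islands": inputs.islands_new};
    }
    return {"fragments": inputs.fragments_old,
            "fragments_expected": inputs.fragments_old,
            "islands": inputs.islands_old};
  }
```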