Explore Workflows

A listing of already parsed workflow graphs, each with the repository, path, and branch/commit it was retrieved from.

workflow graph indexing_bed

https://gitlab.bsc.es/lrodrig1/structuralvariants_poc.git

Path: structuralvariants/subworkflows/indexing_bed.cwl

Branch/Commit ID: 86f2f3fb64e916607637d93cf6715bab90b1f1d3

workflow graph kmer_top_n_extract

https://github.com/ncbi/pgap.git

Path: task_types/tt_kmer_top_n_extract.cwl

Branch/Commit ID: be9d12a3f8e1924183a1dc6a0bda6ada4195ca71

workflow graph kmer_cache_store

https://github.com/ncbi/pgap.git

Path: task_types/tt_kmer_cache_store.cwl

Branch/Commit ID: b4a6e46405c08e0b14ad92f0ab38bcc4a69caa5c

workflow graph PCA - Principal Component Analysis

Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components. The calculation is done by a singular value decomposition of the (centered and possibly scaled) data matrix, not by using eigen on the covariance matrix; this is generally the preferred method for numerical accuracy. A minimal sketch of the SVD-based computation follows this entry.

https://github.com/datirium/workflows.git

Path: workflows/pca.cwl

Branch/Commit ID: 36fd18f11e939d3908b1eca8d2939402f7a99b0f
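The description above states that the components are obtained by SVD of the centered (and possibly scaled) data matrix rather than by an eigendecomposition of the covariance matrix. The following is a minimal NumPy sketch of that computation, for illustration only; it is not the workflow's actual implementation in pca.cwl, and the function name and defaults are assumptions.

```python
import numpy as np

def pca_svd(X: np.ndarray, scale: bool = False):
    """Principal components via SVD of the centered (and optionally scaled) data matrix."""
    Xc = X - X.mean(axis=0)                  # center each variable
    if scale:
        Xc = Xc / Xc.std(axis=0, ddof=1)     # optionally scale to unit variance
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * s                           # observations projected onto the components
    loadings = Vt.T                          # principal axes as columns
    explained_var = s**2 / (X.shape[0] - 1)  # variance carried by each component
    return scores, loadings, explained_var

# Hypothetical usage on random data.
scores, loadings, var = pca_svd(np.random.default_rng(0).normal(size=(100, 5)))
```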

workflow graph kmer_gc_extract_wnode

https://github.com/ncbi/pgap.git

Path: task_types/tt_kmer_gc_extract_wnode.cwl

Branch/Commit ID: 6d8d29a2156b93a75f1d1c6952738bd63f6bd98e

workflow graph tt_blastn_wnode

https://github.com/ncbi/pgap.git

Path: task_types/tt_blastn_wnode.cwl

Branch/Commit ID: f403d9e26d60d3e3591a03077bc9dfa188b1c2bb

workflow graph PGAP Pipeline, simple user input, PGAPX-134

PGAP pipeline for external usage, powered via containers, with simple user input (FASTA + YAML only, no template). An example invocation sketch follows this entry.

https://github.com/ncbi/pgap.git

Path: pgap.cwl

Branch/Commit ID: be9d12a3f8e1924183a1dc6a0bda6ada4195ca71
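The entry above describes a containerized PGAP run driven by a FASTA file plus a YAML job file, with no template. As a rough sketch only, the snippet below shows one way such a top-level CWL workflow could be launched with `cwltool`; the local paths, the job-file name, and the output directory are hypothetical, and the actual input fields are defined by `pgap.cwl` itself.

```python
import subprocess
from pathlib import Path

def run_pgap(repo_dir: str, job_yaml: str, outdir: str = "pgap_out") -> None:
    """Run the top-level pgap.cwl with a user-supplied job YAML (FASTA + metadata)."""
    Path(outdir).mkdir(exist_ok=True)
    subprocess.run(
        [
            "cwltool",
            "--outdir", outdir,
            str(Path(repo_dir) / "pgap.cwl"),  # path from the entry above
            job_yaml,                          # FASTA + YAML-only job order, no template
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical local checkout of the pinned commit and a hypothetical job file.
    run_pgap("pgap", "input.yaml")
```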

workflow graph Unaligned bam to sorted, markduped bam

https://github.com/genome/analysis-workflows.git

Path: definitions/subworkflows/align_sort_markdup.cwl

Branch/Commit ID: ae57b60e9b01e3f0f02f4e828042748409dff5a3

workflow graph ChIP-Seq pipeline paired-end

The original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **ChIP-Seq** basic analysis workflow for a **paired-end** experiment. A [FASTQ](http://maq.sourceforge.net/fastq.shtml) input file has to be provided. The pipeline produces a sorted BAM file along with its BAI index, quality statistics for the input FASTQ file, coverage by estimated fragments as a BigWig file, peak calling data as narrowPeak or broadPeak files, islands with the assigned nearest genes and region type, and data for an average tag density plot.

The workflow starts with the *fastx_quality_stats* step from FASTX-Toolkit, which calculates quality statistics for the input FASTQ file. In parallel, `bowtie` aligns reads from the input FASTQ file to the reference genome (*bowtie_aligner*). The output of this step is an unsorted SAM file, which is sorted and indexed by `samtools sort` and `samtools index` (*samtools_sort_index*). Depending on the workflow's input parameters, the indexed and sorted BAM file can be processed by `samtools rmdup` (*samtools_rmdup*) to remove duplicated reads. If duplicate removal is not required, the original BAM and BAI files are returned; otherwise the *samtools_sort_index_after_rmdup* step repeats `samtools sort` and `samtools index` on the deduplicated BAM.

Next, `macs2 callpeak` performs peak calling (*macs2_callpeak*), and the *macs2_island_count* step reports the number of islands and the estimated fragment size. If the latter is less than 80 bp (hardcoded in the workflow), `macs2 callpeak` is rerun with a forced fixed fragment size (*macs2_callpeak_forced*). It is also possible to force MACS2 to use a preset fragment size in the first place. The *macs2_stat* step then decides which islands and fragment size appear in the workflow output: those from the *macs2_island_count* step or those from the *macs2_island_count_forced* step. If this step's input trigger is set to True, the *macs2_callpeak_forced* step was run and returned results that differ from the *macs2_callpeak* step, so *macs2_stat* returns [fragments_new, fragments_old, islands_new]; if the trigger is False, it returns [fragments_old, fragments_old, islands_old], where the suffix "old" denotes results from the *macs2_island_count* step and the suffix "new" results from the *macs2_island_count_forced* step. (See the decision-logic sketch after this entry.)

The following two steps (*bamtools_stats* and *bam_to_bigwig*) calculate coverage from the BAM file and save it in BigWig format: `bamtools stats` returns the number of mapped reads, which `bedtools genomecov` uses as a scaling factor when it computes coverage and writes it to a BEDgraph file; the BEDgraph is then sorted and converted to BigWig by the `bedGraphToBigWig` tool from the UCSC utilities. The *get_stat* step returns a text file with statistics as [TOTAL, ALIGNED, SUPPRESSED, USED] read counts. The *island_intersect* step assigns the nearest genes and region types to the islands obtained from *macs2_callpeak_forced*, and the *average_tag_density* step calculates data for the average tag density plot from the BAM file.

https://github.com/datirium/workflows.git

Path: workflows/chipseq-pe.cwl

Branch/Commit ID: 36fd18f11e939d3908b1eca8d2939402f7a99b0f
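The fragment-size fallback described above (rerun `macs2 callpeak` with a forced fragment size when the estimate falls below the hardcoded 80 bp threshold, then choose which results to report) can be summarized in plain Python. This is an illustrative sketch of the decision logic only, not the workflow's CWL expressions; the type and function names are made up for the example.

```python
from typing import NamedTuple, Optional

class Macs2Result(NamedTuple):
    fragment_size: int  # estimated (or forced) fragment size in bp
    islands: int        # number of called islands

def select_macs2_results(first: Macs2Result, forced: Optional[Macs2Result]):
    """Mimic the macs2_stat choice: [fragments_new, fragments_old, islands_new] when a
    differing forced re-run exists, else [fragments_old, fragments_old, islands_old]."""
    if forced is not None and forced != first:
        return [forced.fragment_size, first.fragment_size, forced.islands]
    return [first.fragment_size, first.fragment_size, first.islands]

first_pass = Macs2Result(fragment_size=65, islands=12000)    # hypothetical first-pass estimate
needs_rerun = first_pass.fragment_size < 80                  # threshold hardcoded per the description
forced_pass = Macs2Result(fragment_size=150, islands=11500) if needs_rerun else None
print(select_macs2_results(first_pass, forced_pass))         # -> [150, 65, 11500]
```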

workflow graph SV filtering workflow

https://github.com/genome/analysis-workflows.git

Path: definitions/subworkflows/filter_sv_vcf.cwl

Branch/Commit ID: 60edaf6f57eaaf02cda1a3d8cb9a825aa64a43e2